Sunday, January 22, 2017

Live Forensics and the Cloud Part II

Cloud computing offers a sense of "vastness" in terms of storage and remote processing. According to Simson Garfinkel, a major challenge for any digital forensics investigator examining data within the cloud is the inability to locate or identify data or code that is lost when single data structures are split into elements.

This in effect directly impacts forensic visibility.

Within this ecosystem, a major concern is access to and preservation of data during an ongoing digital forensic investigation. As mentioned in Part 1, in a live and dynamic system such as the cloud it is virtually impossible to return to the original state of the data after a "snapshot" has been obtained for investigation.

Also important are the jurisdictional and legal ramifications pertaining to the physical location of the cloud systems holding the data under investigation.

This part of the article continues from the question, "How can an investigator identify and track such an issue?" It looks at identity within the cloud, specifically the issue of anonymous authentication and how it can impact a digital forensic investigation.

Going back a bit in time, we can reference provenance as detailed in a paper published in 2001 by Clifford A. Lynch.

Lynch proposed the use of tools that allow the identity of the person or organization standing behind a metadata assertion to be determined. This, in turn, allows trust in that entity's identity to be established.

Per Foster, Zhao, Raicu and Lu, provenance refers to a data product's derivation history. It includes "all the data sources, intermediate data products, and the procedures that were applied to produce the data product." In other words, it is something of an "audit trail".
Foster et al. also stated that, with regard to the cloud, such an audit trail could face challenges stemming from "issues such as tracking data production across different service providers (with different platform visibility and access policies) and across different software and hardware abstraction layers within one provider."
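
As a rough illustration of what such an audit trail might capture, here is a minimal Python sketch. The class and field names are my own, not taken from Foster et al.: each processing step is recorded as an append-only provenance entry listing its input data sources, the procedure applied, the actor, and the data product produced.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import List

    @dataclass
    class ProvenanceEntry:
        """One step in a data product's derivation history."""
        inputs: List[str]    # identifiers of the source data products
        procedure: str       # the procedure/transformation that was applied
        output: str          # identifier of the data product produced
        actor: str           # service or user that performed the step
        timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    @dataclass
    class ProvenanceLog:
        """Append-only audit trail for data objects in one system."""
        entries: List[ProvenanceEntry] = field(default_factory=list)

        def record(self, entry: ProvenanceEntry) -> None:
            self.entries.append(entry)

        def history(self, data_id: str) -> List[ProvenanceEntry]:
            """Every recorded step that produced or consumed the given data product."""
            return [e for e in self.entries if e.output == data_id or data_id in e.inputs]

    # Example: two processing steps handled by different services
    log = ProvenanceLog()
    log.record(ProvenanceEntry(["raw/upload.csv"], "normalize", "staging/clean.csv", "etl-service"))
    log.record(ProvenanceEntry(["staging/clean.csv"], "aggregate", "reports/summary.json", "analytics-service"))
    print(len(log.history("staging/clean.csv")))   # 2: the step that produced it and the step that used it

The challenge Foster et al. point to is precisely that, in the cloud, no single party may hold a complete log of this kind across providers and abstraction layers.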

Researchers Lu, Lin, Liang and Shen took the notion of provenance suggested by Lynch a step further and proposed that cloud computing should provide provenance "to record ownership and process history of data objects in the cloud," on the assumption that "given its provenance, a data object can report who created and who modified its contents."

This, of course, can greatly impact the outcome of a digital forensic investigation by providing a degree of accountability and, in the best case, a process- and user-related footprint.
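
To make the idea concrete, a data object carrying such a provenance record could answer exactly those questions. The sketch below is purely illustrative (the class and field names are mine, not from the paper) and omits any cryptographic protection, which is the subject of the next point.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DataObjectProvenance:
        """Ownership and process history of one cloud data object (illustrative only)."""
        object_id: str
        created_by: str
        modifications: List[str] = field(default_factory=list)   # user ids, in order of modification

        def record_modification(self, user_id: str) -> None:
            self.modifications.append(user_id)

        def report(self) -> str:
            """The object reports who created it and who modified its contents."""
            modifiers = ", ".join(self.modifications) or "no one"
            return f"{self.object_id}: created by {self.created_by}; modified by {modifiers}"

    prov = DataObjectProvenance("bucket/contract.docx", created_by="alice")
    prov.record_modification("bob")
    print(prov.report())   # bucket/contract.docx: created by alice; modified by bob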

The researchers also stressed that, in order to ensure data integrity, the provenance itself should be secured, i.e. secure provenance.

Thus the concept of "secure provenance should satisfy requirements of:

• 1) Unforgeability, and

• 2) Conditional privacy preservation, where only a trusted authority has the ability to reveal the real identity recorded in the provenance."
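
These two requirements can be loosely illustrated in code. The sketch below is my own simplification, not the scheme from the paper: it stands in for unforgeability with an HMAC over each provenance record, and for conditional privacy preservation with a pseudonym table held only by a trusted authority (the real scheme relies on anonymous-authentication cryptography rather than a MAC key and a lookup table).

    import hashlib, hmac, json, secrets

    class TrustedAuthority:
        """Stand-in for the trusted authority: it issues pseudonyms and alone can
        map them back to real identities (illustrative lookup table only)."""
        def __init__(self):
            self._pseudonym_to_identity = {}

        def issue_pseudonym(self, real_identity: str) -> str:
            pseudonym = secrets.token_hex(8)
            self._pseudonym_to_identity[pseudonym] = real_identity
            return pseudonym

        def reveal(self, pseudonym: str) -> str:
            # Only the authority holds this mapping, so only it can de-anonymize a record.
            return self._pseudonym_to_identity[pseudonym]

    # Unforgeability stand-in: records are authenticated with a MAC key held by the
    # provenance service, so any tampering with a stored record fails verification.
    PROVENANCE_KEY = secrets.token_bytes(32)

    def sign_record(record: dict) -> str:
        payload = json.dumps(record, sort_keys=True).encode()
        return hmac.new(PROVENANCE_KEY, payload, hashlib.sha256).hexdigest()

    def verify_record(record: dict, tag: str) -> bool:
        return hmac.compare_digest(sign_record(record), tag)

    # Example: a pseudonymous, tamper-evident provenance record
    authority = TrustedAuthority()
    pseudonym = authority.issue_pseudonym("real-user-42")
    record = {"object": "bucket/contract.docx", "action": "modify", "actor": pseudonym}
    tag = sign_record(record)

    assert verify_record(record, tag)                  # intact record verifies
    forged = dict(record, actor="someone-else")
    assert not verify_record(forged, tag)              # a forged record is rejected
    print(authority.reveal(pseudonym))                 # real-user-42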

The researchers' model proposed a secure provenance (SP) scheme for cloud computing, defined as a five-part process:

"A secure provenance scheme SP is defined by the following algorithms: system setup, key generation, anonymous authentication, authorized access, and provenance tracking (Setup, KGen, AnonyAuth, AuthAccess, and ProveTrack)."
According to the paper, this system provides "trusted evidence for data forensics in cloud computing." Applied to a real-world cloud ecosystem, this means that if any issues occur, a system manager (SM) can compute the provenance chain using the provenance tracking algorithm and thereby trace a specific user's identity.
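
For orientation, the five algorithms named above can be pictured as the interface of a provenance service. The outline below only mirrors the algorithm names from the quote; the method signatures and docstrings are my own guesses at each algorithm's role, and the bodies are placeholders rather than the paper's cryptographic construction.

    class SecureProvenanceScheme:
        """Interface outline for the five SP algorithms; placeholders only."""

        def setup(self, security_parameter: int) -> dict:
            """Setup: generate system-wide public parameters and the system manager's master key."""
            raise NotImplementedError

        def kgen(self, params: dict, master_key: bytes, user_id: str) -> bytes:
            """KGen: issue a private key to a registered cloud user."""
            raise NotImplementedError

        def anony_auth(self, params: dict, user_key: bytes) -> dict:
            """AnonyAuth: let a user authenticate to the cloud without revealing a real identity."""
            raise NotImplementedError

        def auth_access(self, params: dict, user_key: bytes, object_id: str, operation: str) -> dict:
            """AuthAccess: perform an authorized operation and emit a provenance record for it."""
            raise NotImplementedError

        def prove_track(self, params: dict, master_key: bytes, records: list) -> list:
            """ProveTrack: let the system manager (SM) trace provenance records back to real user identities."""
            raise NotImplementedError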
