Ensuring Data Storage Security in Cloud Computing
Submitted by: Sheth M. Ovesh. Under the guidance of: Assistant Prof. Ajay Kumar Sharma, M.Tech.
A Working Definition of Cloud Computing
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
5 Essential Cloud Characteristics

1. On-Demand Self-Service
- Completely automated
- Users abstracted from the implementation
- Near real-time delivery (seconds or minutes)
- Services accessed through a self-serve web interface

2. Broad Network Access
- Open standards and APIs
- Almost always IP, HTTP, and REST
- Available from anywhere with an internet connection

3. Shared/Pooled Resources
- Resources are drawn from a common, location-independent pool
- Common resources build economies of scale
- Common infrastructure runs at high efficiency

4. Rapid Elasticity
- Resources dynamically allocated between users as needed
- Additional resources dynamically released when no longer needed
- Fully automated

5. Measured Service
- Services are metered, like a utility
- Users pay only for services used
- Services can be cancelled at any time
Cloud Objectives
- Correctness
- Integrity
- Flexibility
- Maintainability
- Accessibility
- Availability
SYSTEM ARCHITECTURE
Existing System

Traditional cryptographic primitives for data security protection cannot be directly adopted, because users lose control of their data under cloud computing. Therefore, verification of correct data storage in the cloud must be conducted without explicit knowledge of the whole data. The data stored in the cloud may be frequently updated by the users, including insertion, deletion, modification, appending, reordering, etc., so ensuring storage correctness under dynamic data updates is of paramount importance. None of the existing distributed schemes is aware of dynamic data operations; as a result, their applicability to cloud data storage can be drastically limited.
Proposed System

We propose an effective and flexible distributed scheme with explicit dynamic data support to ensure the correctness of users' data in the cloud. We rely on erasure-correcting code in the file distribution preparation to provide redundancy and guarantee data dependability. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves storage correctness insurance as well as data error localization. Unlike most prior works on ensuring remote data integrity, the new scheme supports secure and efficient dynamic operations on data blocks, including update, delete, and append.
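As a hedged illustration (not the paper's actual construction), the simplest erasure code is a single XOR parity block: it adds redundancy so that any one lost data block can be rebuilt from the surviving blocks.

```python
from functools import reduce

def xor_blocks(blocks):
    # byte-wise XOR of equal-length blocks
    return reduce(lambda x, y: bytes(a ^ b for a, b in zip(x, y)), blocks)

def encode(blocks):
    # append one parity block to the data blocks
    return list(blocks) + [xor_blocks(blocks)]

def recover(encoded, lost_index):
    # XOR of all surviving blocks reproduces the missing one
    survivors = [b for i, b in enumerate(encoded) if i != lost_index]
    return xor_blocks(survivors)
```

Real deployments use stronger codes (e.g., Reed-Solomon) that tolerate multiple erasures, but the redundancy principle is the same.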
Windows Azure

Windows Azure is the foundation of Microsoft's cloud platform for developers: an operating system for the cloud that runs applications in the cloud and provides storage, application management, and a developer SDK. Windows Azure is ideal for applications needing scalability, availability, and fault tolerance.
Windows Azure Storage

Storage in the cloud:
- Scalable, durable, and available
- Anywhere, anytime access
- Only pay for what the service uses
- Exposed via RESTful web services: usable from Windows Azure compute or from anywhere on the internet
- Various storage abstractions: Tables, Blobs, Queues, Drives
Windows Azure Service Architecture

[Diagram: requests from the Internet (TCP or HTTP) pass through load balancers to Web Roles (IIS hosting ASPX/ASMX/WCF web sites) and Worker Roles, which communicate with storage (Queues, Tables, Blobs) inside a Windows Azure data center.]

The key point is that all external connections, including those to storage, come through a load balancer. Two newer features are diagrammed as well: inter-role communication (note there is no load balancer) and TCP ports opened directly to Worker Roles (or Web Roles). Queues are still used for asynchronous, reliable communication in many cases, while inter-role communication fills in when direct synchronous communication is needed. The load balancers are key to Windows Azure.
Windows Azure Storage Abstractions

- Blobs – simple named files along with metadata for the file
- Tables – structured storage; a table is a set of entities, and an entity is a set of properties
- Queues – reliable storage and delivery of messages for an application

The Windows Azure storage services provide storage for binary and text data, messages, and structured data:
- The Blob service, for storing binary and text data
- The Queue service, for storing messages that may be accessed by a client
- The Table service, for structured storage of non-relational data
- Windows Azure Drives, for mounting an NTFS volume accessible to code running in your Windows Azure service

Programmatic access to the Blob, Queue, and Table services is available via the Windows Azure Managed Library and the storage services REST API.
Blob Storage Concepts

[Diagram: an Account (e.g., "user") contains Containers (e.g., "images", "videos"), each holding blobs (e.g., PIC01.JPG, PIC02.JPG, VID1.AVI) made up of blocks or pages.]

The Blob service provides storage for entities such as binary files and text files. The REST API for the Blob service exposes two resources: containers and blobs. A container is a set of blobs; every blob must belong to a container. The Blob service defines two types of blobs:
- Block blobs, which are optimized for streaming.
- Page blobs, which are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob.

Blobs can be read by calling the Get Blob operation; a client may read the entire blob or an arbitrary range of bytes. Block blobs less than or equal to 64 MB in size can be uploaded with a single Put Blob operation. Block blobs larger than 64 MB must be uploaded as a set of blocks, each of which must be less than or equal to 4 MB in size. Page blobs are created and initialized with a maximum size by a call to Put Blob; to write content to a page blob, you call the Put Page operation. The maximum size currently supported for a page blob is 1 TB.

Using the REST API, developers can create a hierarchical namespace similar to a file system: blob names may encode a hierarchy by using a configurable path separator. For example, the blob names MyGroup/MyBlob1 and MyGroup/MyBlob2 imply a virtual level of organization, and the enumeration operation supports traversing this virtual hierarchy, so you can enumerate all blobs organized under MyGroup/.
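The block-upload rule above can be sketched in Python. The 64 MB and 4 MB limits come from the slide; the function names are my own illustration, not part of any SDK:

```python
MAX_SINGLE_PUT = 64 * 1024 * 1024   # <= 64 MB: a single Put Blob call suffices
MAX_BLOCK_SIZE = 4 * 1024 * 1024    # larger blobs: upload as blocks of <= 4 MB

def needs_block_upload(size: int) -> bool:
    # decide whether the blob must be split into blocks
    return size > MAX_SINGLE_PUT

def split_into_blocks(data: bytes, limit: int = MAX_BLOCK_SIZE):
    # slice the payload into consecutive blocks no larger than the limit
    return [data[i:i + limit] for i in range(0, len(data), limit)]
```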
Table Storage Concepts

[Diagram: an Account (e.g., "user") contains Tables (e.g., "customers", "photos"), each holding Entities with properties such as Name, Address, Photo ID, and Date.]

The Table service provides structured storage in the form of tables. It supports a REST API that is compliant with the ADO.NET Data Services REST API, and developers may also use the .NET Client Library for ADO.NET Data Services to access the Table service.
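A minimal sketch of "a table is a set of entities; an entity is a set of properties". The PartitionKey/RowKey naming follows Azure Table conventions; the in-memory dict is purely illustrative:

```python
def insert_entity(table, partition_key, row_key, **props):
    # an entity is just a set of properties, addressed by its keys
    table[(partition_key, row_key)] = {
        "PartitionKey": partition_key,
        "RowKey": row_key,
        **props,
    }

def get_entity(table, partition_key, row_key):
    return table[(partition_key, row_key)]

table = {}
insert_entity(table, "customers", "001", Name="Ada Lovelace", Address="1 Main St")
insert_entity(table, "photos", "p1", PhotoID="p1", Date="2011-05-01")
```

Note that entities in the same table need not share a schema: the "customers" entity has Name/Address properties while the "photos" entity has PhotoID/Date.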
Queue Storage Concepts

[Diagram: an Account (e.g., "user") contains Queues (e.g., "order processing"), each holding Messages (e.g., customer ID, order ID).]

The Queue service provides reliable, persistent messaging within and between services. The REST API for the Queue service exposes two resources: queues and messages.
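Reliable delivery in queue services typically follows a get/delete pattern: a retrieved message becomes invisible rather than removed, and reappears if the consumer fails before deleting it. This toy model (my simplification, not the Azure API) shows the idea:

```python
import itertools

class ToyQueue:
    """Toy model of get/delete messaging with a simulated visibility timeout."""

    def __init__(self):
        self.visible = []      # messages ready for delivery
        self.invisible = {}    # delivered but not yet deleted, keyed by receipt
        self._receipts = itertools.count()

    def put(self, msg):
        self.visible.append(msg)

    def get(self):
        # delivery hides the message instead of removing it
        if not self.visible:
            return None
        msg = self.visible.pop(0)
        receipt = next(self._receipts)
        self.invisible[receipt] = msg
        return receipt, msg

    def delete(self, receipt):
        # only an explicit delete removes the message for good
        del self.invisible[receipt]

    def expire(self):
        # simulate the visibility timeout: undeleted messages reappear
        for r in list(self.invisible):
            self.visible.append(self.invisible.pop(r))
```

If a worker crashes after get() but before delete(), the message is redelivered after the timeout, which is what makes the queue "reliable".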
Cloud Computing Security
Security is the Major Issue
Module 1: Ensuring Cloud Data Storage
[Diagram: a key server distributes keys K1 and K2 to the clients and the storage server; the sender encrypts with Ciphertext = Message + Key, and the receiver decrypts with Message = Ciphertext - Key.]
RC4 Algorithm

RC4 is a stream cipher, a symmetric-key algorithm. The same algorithm is used for both encryption and decryption, as the data stream is simply XORed with the generated key sequence. The keystream is completely independent of the plaintext used. A stream cipher is one of the simplest methods of encrypting data: each bit of the data is sequentially encrypted using one bit of the keystream.

[Diagram: a keystream generator driven by the ciphering key Kc produces keystream bits Kc[i], which are XORed with plaintext bits m[i] to produce ciphertext bits C[i].]
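The XOR symmetry described above (the same operation both encrypts and decrypts) can be shown with a toy keystream; in RC4, the keystream generator would supply these bytes:

```python
message = b"attack at dawn"

# toy keystream, stand-in for the output of a real keystream generator
keystream = bytes((i * 37 + 11) % 256 for i in range(len(message)))

def xor_stream(data: bytes, ks: bytes) -> bytes:
    # XOR each data byte with the corresponding keystream byte
    return bytes(d ^ k for d, k in zip(data, ks))

ciphertext = xor_stream(message, keystream)
recovered = xor_stream(ciphertext, keystream)  # applying the same XOR decrypts
```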
Steps of the RC4 Algorithm

The steps of the RC4 encryption algorithm are as follows:
1. Get the data to be encrypted and the selected key.
2. Create two byte arrays. Initialize one array with the numbers from 0 to 255. Fill the other array with the selected key, repeated as needed.
3. Randomize the first array depending on the array of the key.
4. Randomize the first array within itself to generate the final key stream.
5. XOR the final key stream with the data to be encrypted to give the ciphertext.
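The steps above can be sketched as a complete RC4 implementation: the key-scheduling algorithm (steps 2-3) permutes the state array using the key, and the pseudo-random generation loop (steps 4-5) produces the keystream that is XORed with the data. Note that RC4 is cryptographically broken today and is shown only to illustrate the steps:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # key-scheduling: initialize 0..255 and randomize it with the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]

    # pseudo-random generation: keep randomizing S and emit keystream bytes,
    # XORing each one with a data byte
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Because encryption and decryption are the same XOR operation, rc4(key, rc4(key, msg)) returns msg; the output on the standard test vector (key "Key", plaintext "Plaintext") can be checked against published values.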
Systematic Randomization

[Diagram: S-box 1 is initialized with the numbers 0 to 255, and S-box 2 is filled with the chosen key; systematic randomization of S-box 1 against S-box 2, then within itself, yields the final keystream, which is XORed with the plaintext/ciphertext to produce the ciphertext/plaintext.]
Module 2: Correctness Verification and Error Localization

[Diagram: the client encodes data before distributing it to the servers and decodes/verifies it on retrieval.]
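A hedged sketch of error localization: the proposed scheme uses homomorphic tokens over erasure-coded vectors, but the localization idea can be shown with plain per-server hashes, where a mismatched challenge response pinpoints the misbehaving server:

```python
import hashlib

def token(block: bytes) -> str:
    # stand-in for the scheme's homomorphic token: one hash per server's block
    return hashlib.sha256(block).hexdigest()

# client distributes coded blocks and keeps precomputed verification tokens
servers = {
    "server-1": b"coded block 1",
    "server-2": b"coded block 2",
    "server-3": b"coded block 3",
}
precomputed = {name: token(data) for name, data in servers.items()}

# later, one server silently corrupts its block
servers["server-2"] = b"tampered block"

def localize(servers, precomputed):
    # verification not only detects corruption but names the faulty server(s)
    return [n for n, d in servers.items() if token(d) != precomputed[n]]
```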
CRC Algorithm for Encoding and Decoding

The cyclic redundancy check, or CRC, is a technique for detecting errors in digital data, but not for correcting them when they are detected. It is used primarily in data transmission. In the CRC method, a certain number of check bits, often called a checksum, are appended to the message being transmitted. The receiver can determine whether the check bits agree with the data, ascertaining with a certain degree of probability whether an error occurred in transmission. If an error occurred, the receiver sends a "negative acknowledgement" (NAK) back to the sender, requesting that the message be retransmitted.
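The check-bit idea can be demonstrated with CRC-32 (one common CRC variant; the slides do not name a specific polynomial). A bitwise implementation matches Python's zlib.crc32, and a bit flipped in transit is detected:

```python
import zlib

def crc32_bitwise(data: bytes) -> int:
    # reflected CRC-32 (polynomial 0xEDB88320), the variant used by zlib
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xEDB88320
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

# sender appends the checksum to the message
message = b"hello cloud"
checksum = crc32_bitwise(message)

# one flipped bit in transit: the receiver's recomputed CRC disagrees,
# so it would send a NAK and request retransmission
corrupted = bytes([message[0] ^ 0x01]) + message[1:]
error_detected = crc32_bitwise(corrupted) != checksum
```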
Encoder and Decoder for a Simple Cyclic Redundancy Check
Module 3: Providing Dynamic Data Operation Support

[Diagram: the client issues insert, update, and view operations against the server.]
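A toy block store (my illustration, not the paper's construction) showing how update, delete, and append can keep per-block verification tokens consistent, so correctness checks still work after dynamic operations:

```python
import hashlib

class BlockStore:
    """Toy block store whose tokens are recomputed on every dynamic operation."""

    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.tokens = [self._token(b) for b in self.blocks]

    @staticmethod
    def _token(block: bytes) -> str:
        # stand-in for the scheme's homomorphic token
        return hashlib.sha256(block).hexdigest()

    def update(self, index, block):
        self.blocks[index] = block
        self.tokens[index] = self._token(block)  # keep the token in sync

    def delete(self, index):
        del self.blocks[index]
        del self.tokens[index]

    def append(self, block):
        self.blocks.append(block)
        self.tokens.append(self._token(block))

    def verify(self):
        # indices whose stored block no longer matches its token
        return [i for i, (b, t) in enumerate(zip(self.blocks, self.tokens))
                if self._token(b) != t]
```

Because each operation refreshes the affected token, legitimate dynamic updates verify cleanly, while out-of-band corruption is still caught and localized.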
CONCLUSION

To ensure the correctness of users' data in cloud data storage, we proposed an effective and flexible distributed scheme with explicit dynamic data support, including block update, delete, and append. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization: whenever data corruption is detected during the storage correctness verification across the distributed servers, we can almost guarantee the simultaneous identification of the misbehaving server(s).
THANK YOU FOR YOUR TIME