Update islet project readmes for /Samsung/ -> /islet-project/ move
Also cleanup trailing whitespaces.

Signed-off-by: Lukasz Pawelczyk <l.pawelczyk@samsung.com>
Havner authored and bitboom committed Jan 22, 2024
1 parent 2dc4062 commit 1ddfc4c
Showing 7 changed files with 49 additions and 50 deletions.
64 changes: 32 additions & 32 deletions README.md
@@ -1,26 +1,26 @@
-<p align="center"><img src="https://github.com/Samsung/islet/blob/main/doc/res/logo-title.jpg?raw=true" height="100px"></p>
+<p align="center"><img src="https://github.com/islet-project/islet/blob/main/doc/res/logo-title.jpg?raw=true" height="100px"></p>

-Islet is an open-source software project written in Rust that enables confidential computing
+Islet is an open-source software project written in Rust that enables confidential computing
on ARM architecture devices using the ARMv9 CCA.
-The primary objective of Islet is to enable on-device confidential computing
-and protect user privacy on end user devices.
+The primary objective of Islet is to enable on-device confidential computing
+and protect user privacy on end user devices.

-While current confidential computing solutions mainly focus on server-side
-protection, it is equally important to safeguard user information at the user
+While current confidential computing solutions mainly focus on server-side
+protection, it is equally important to safeguard user information at the user
device level since that is where private data collection initially occurs.
-Furthermore, as more and more users rely on privacy apps such as private
-messengers, secure emails, password managers, and web browsers with privacy
+Furthermore, as more and more users rely on privacy apps such as private
+messengers, secure emails, password managers, and web browsers with privacy
settings, there is a growing need to ensure privacy on user devices.
Islet, an open-source project, addresses this need by providing a platform
-for ARM-based confidential computing.
+for ARM-based confidential computing.

-Enabling CC on user devices will not only establish end-to-end CC throughout
-the entire data processing path,
-but it will also help create a secure computation model
-that enables processing of user private data on the user device
-using the same components that previously were employed at the server side
-without disclosing business logic.
-Furthermore, on-device confidential computing will be a key enabler for
+Enabling CC on user devices will not only establish end-to-end CC throughout
+the entire data processing path,
+but it will also help create a secure computation model
+that enables processing of user private data on the user device
+using the same components that previously were employed at the server side
+without disclosing business logic.
+Furthermore, on-device confidential computing will be a key enabler for
machine-to-machine computing without the need for server intervention

## Feature Overview
@@ -31,38 +31,38 @@ machine-to-machine computing without the need for server intervention

## Overall Architecture

-Islet provides a platform for running virtual machines (VMs)
-confidentially, with standard SDKs for easy integration with other confidential
-computing frameworks at upper layers.
-The platform consists of two key components:
-the Islet Realm Management Monitor (Islet-RMM) and Islet Hardware Enforced Security (Islet-HES).
+Islet provides a platform for running virtual machines (VMs)
+confidentially, with standard SDKs for easy integration with other confidential
+computing frameworks at upper layers.
+The platform consists of two key components:
+the Islet Realm Management Monitor (Islet-RMM) and Islet Hardware Enforced Security (Islet-HES).

-- `Islet RMM` operates at EL2 in the Realm world on the application processor cores
-and manages the confidential VMs, known as realms.
-- On the other hand, `Islet HES` performs device boot measurement, generates
-platform attestation reports, and manages sealing key functionality within a secure
+- `Islet RMM` operates at EL2 in the Realm world on the application processor cores
+and manages the confidential VMs, known as realms.
+- On the other hand, `Islet HES` performs device boot measurement, generates
+platform attestation reports, and manages sealing key functionality within a secure
hardware IP apart from the main application processor.

![islet-overview](doc/res/overview.png)

-In designing Islet, we aim to address the current security challenges in confidential
+In designing Islet, we aim to address the current security challenges in confidential
computing technologies right from the very beginning.
-To ensure that our software is built with safety in mind, we have chosen to use the
-Rust programming language, known for its unique security model that ensures memory
-safety and concurrency safety.
-Moving forward, we also plan to incorporate formal
+To ensure that our software is built with safety in mind, we have chosen to use the
+Rust programming language, known for its unique security model that ensures memory
+safety and concurrency safety.
+Moving forward, we also plan to incorporate formal
verification techniques to further enhance the security of our design and implementation.

For more information, please visit our [developer site](https://islet-project.github.io/islet/).

## A demo video (Confidential ML)

-![this page](https://github.com/Samsung/islet/raw/main/examples/confidential-ml/video/confidential_ml.gif)
+![this page](https://github.com/islet-project/islet/raw/main/examples/confidential-ml/video/confidential_ml.gif)

- This video shows how ISLET achieves an end-to-end confidential machine learning with a chat-bot scenario.
- The video proceeds as follows.
1. It starts with a slide that describes all components involved in this demo. All components will run on confidential computing platforms.
2. (*feed an ML model*) The model provider feeds the ML model into the ML server. This is done through a secure channel established with the aid of the certifier framework.
3. (*run a coding assistant*) A mobile device user asks a chat-bot application that runs on ISLET for generating a function. And then, that request is passed on to the ML server through a secure channel. Finally, the user can see the result (i.e., function).
4. (*launch a malicious server*) This time, we launch a malicious server to show a failure case. When it attempts to join the certifier service (on the right side of the screen), it will not pass authentication as it results in a different measurement. Therefore, the malicious server cannot interact with the mobile device user in the first place.
-- To download this video, click [here](https://github.com/Samsung/islet/raw/main/examples/confidential-ml/video/confidential_ml.mp4).
+- To download this video, click [here](https://github.com/islet-project/islet/raw/main/examples/confidential-ml/video/confidential_ml.mp4).
3 changes: 1 addition & 2 deletions doc/getting-started/app-dev.md
@@ -43,7 +43,7 @@ Sealing result Ok(())
## Example code snippet
Below is a code snippet of the example.
-You can refer to [the whole example code](https://github.com/Samsung/islet/blob/main/sdk/examples/simulated.rs).
+You can refer to [the whole example code](https://github.com/islet-project/islet/blob/main/sdk/examples/simulated.rs).
```rust
use islet_sdk::prelude::*;
@@ -60,4 +60,3 @@ let sealed = seal(plaintext)?;
let unsealed = unseal(&sealed)?;
assert_eq!(plaintext, &unsealed[..]);
```
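The diff only shows fragments of the SDK snippet, so the sealing internals are elided. As a self-contained illustration of the round-trip property that the `assert_eq!` above checks, here is a toy stand-in; the XOR "cipher", the fixed `DEMO_KEY`, and these function signatures are invented for the sketch and are not the real `islet_sdk` API, which derives its sealing key from device secrets:

```rust
// Toy stand-in for a seal/unseal round trip. NOT the islet_sdk API:
// the key is a fixed demo byte and provides no security whatsoever.
const DEMO_KEY: u8 = 0x5A;

fn seal(plaintext: &[u8]) -> Result<Vec<u8>, String> {
    // "Encrypt" by XORing every byte with the demo key.
    Ok(plaintext.iter().map(|b| b ^ DEMO_KEY).collect())
}

fn unseal(sealed: &[u8]) -> Result<Vec<u8>, String> {
    // XOR is its own inverse, so unsealing applies the same transform.
    Ok(sealed.iter().map(|b| b ^ DEMO_KEY).collect())
}

fn main() -> Result<(), String> {
    let plaintext = b"confidential bytes";
    let sealed = seal(plaintext)?;
    assert_ne!(&sealed[..], &plaintext[..]); // sealed form differs from the input
    let unsealed = unseal(&sealed)?;
    assert_eq!(&plaintext[..], &unsealed[..]); // round trip restores the data
    Ok(())
}
```

The point the sketch makes is only the contract: sealing transforms the data, unsealing with the same key recovers it byte for byte, which is exactly what the final `assert_eq!` in the SDK example verifies.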
6 changes: 3 additions & 3 deletions doc/getting-started/plat-dev.md
@@ -4,7 +4,7 @@ Platform components include from Realm Management Monitor(RMM) to Realm.

`Islet` provides Rust-based RMM and scripts to compose Confidential Computing Platform.
You can explore CCA platform with our scripts and
-powerful [third-party projects](https://github.com/Samsung/islet/tree/main/third-party).
+powerful [third-party projects](https://github.com/islet-project/islet/tree/main/third-party).

## Setting build environment

@@ -39,8 +39,8 @@ $ LD_LIBRARY_PATH=./ ./sdk-example-c
```

## Running a linux realm with a networking support and prebuilt examples
-See [examples](https://github.com/Samsung/islet/tree/main/examples).
-To get details about its network configuration, see [network.md](https://github.com/Samsung/islet/blob/main/doc/network.md)
+See [examples](https://github.com/islet-project/islet/tree/main/examples).
+To get details about its network configuration, see [network.md](https://github.com/islet-project/islet/blob/main/doc/network.md)

## Testing the realm features
```bash
2 changes: 1 addition & 1 deletion doc/usecases/confidential_ml.md
@@ -65,4 +65,4 @@ We see that this attack could be addressed to some extent by designing and imple
## Time to play around with real examples

Anyone can try out what we've explained so far, that is to say, running traditional ML or federated learning on top of ISLET with simple ML models.
-Check out [this markdown file](https://github.com/Samsung/islet/tree/main/examples/confidential-ml) to play around with ISLET for confidential ML!
+Check out [this markdown file](https://github.com/islet-project/islet/tree/main/examples/confidential-ml) to play around with ISLET for confidential ML!
8 changes: 4 additions & 4 deletions examples/confidential-ml/code_model.md
@@ -1,7 +1,7 @@
## Try out confidential code generation in ML setting

This section explains how to try out confidential code generation in ML setting. (For this model, FL is not supported)
-We prepare [a docker image](https://github.com/Samsung/islet/releases/download/example-confidential-ml-v1.1/cca_ubuntu_release.tar.gz) that contains everything needed to try out this example and it involves 4 different instances-- *certifier-service*, *runtime*, *model-provider*, *device*-- meaning that you need to open 4 terminals, one for each of them.
+We prepare [a docker image](https://github.com/islet-project/islet/releases/download/example-confidential-ml-v1.1/cca_ubuntu_release.tar.gz) that contains everything needed to try out this example and it involves 4 different instances-- *certifier-service*, *runtime*, *model-provider*, *device*-- meaning that you need to open 4 terminals, one for each of them.

In this example, *device* is not involved in ML operations (inference and training), they just pass user-input on to *runtime* and then *runtime* does inference with the code model and give the result(code) back to *device*.
The code model is a pre-trained model and *runtime* will not do training with user-input. This is the way that most state-of-the-art chatbots work these days.
@@ -16,7 +16,7 @@ the quality of the output might be low. See [this csv file](./model_provider/cod
Before trying this example, please do the following first to import and run a docker image.
(Note that this docker image is based on Ubuntu 22.04)
```
-$ wget https://github.com/Samsung/islet/releases/download/example-confidential-ml-v1.1/cca_ubuntu_release.tar.gz
+$ wget https://github.com/islet-project/islet/releases/download/example-confidential-ml-v1.1/cca_ubuntu_release.tar.gz
$ gzip -d cca_ubuntu_release.tar.gz
$ cat cca_ubuntu_release.tar | sudo docker import - cca_release:latest
$ sudo docker run --net=host -it -d --name=cca_ubuntu_release cca_release /bin/bash
@@ -165,7 +165,7 @@ $ <browser> open a browser and go in http://localhost:8000
$ <browser> type a request in the chatbox, such as "write a function to add two numbers",
and then device(terminal-5) passes the request on to the runtime(terminal-3),
and eventually, the chatbot in the browser will show the prediction (code) made by runtime.
Here is the code:
int min(int a, int b) {
return a > b ? b : a;
@@ -223,7 +223,7 @@ $ <browser> open a browser and go in http://193.168.10.15:8000
$ <browser> type a request in the chatbox, such as "write a function to add two numbers",
and then device(terminal-5) passes the request on to the runtime(terminal-3),
and eventually, the chatbot in the browser will show the prediction (code) made by runtime.
Here is the code:
int min(int a, int b) {
return a > b ? b : a;
12 changes: 6 additions & 6 deletions examples/confidential-ml/word_model.md
@@ -1,7 +1,7 @@
## Try out confidential word prediction in ML setting

This section explains how to try out confidential word prediction in ML setting.
-We prepare [a docker image](https://github.com/Samsung/islet/releases/download/example-confidential-ml-v1.1/cca_ubuntu_release.tar.gz) that contains everything needed to try out this example and it involves 5 different instances-- *certifier-service*, *runtime*, *model-provider*, *device1*, *device2*-- meaning that you need to open 5 terminals, one for each of them.
+We prepare [a docker image](https://github.com/islet-project/islet/releases/download/example-confidential-ml-v1.1/cca_ubuntu_release.tar.gz) that contains everything needed to try out this example and it involves 5 different instances-- *certifier-service*, *runtime*, *model-provider*, *device1*, *device2*-- meaning that you need to open 5 terminals, one for each of them.

[TODO] Note that as of now we do not offer any convenient way to try out this example in your host machine directly instead of the docker image, as this example involves a lot of dependencies. Anyhow, we plan to support building and testing this example on the host PC in the near future.

@@ -35,7 +35,7 @@ $ <terminal-4: device1> ./run.sh 0.0.0.0 8125 word 0
Prediction: abou{ # this is an initial prediction as a result of on-device inference
Type correct answer: about # provide a correct answer for training
...
... # sends "about" to runtime. runtime does training with this data and sends a newly trained model to this device.
... # sends "about" to runtime. runtime does training with this data and sends a newly trained model to this device.
...
Type characters: abo # type in "abo" again and see if it leads to "about" which is a correct word.
Prediction: about # shows a correct guess-!
@@ -107,7 +107,7 @@ $ <terminal-4: device1> ./run.sh 192.168.33.1 8125 word 0
Prediction: abou{ # this is an initial prediction as a result of on-device inference
Type correct answer: about # provide a correct answer for training
...
... # sends "about" to runtime. runtime does training with this data and sends a newly trained model to this device.
... # sends "about" to runtime. runtime does training with this data and sends a newly trained model to this device.
...
Type characters: abo # type in "abo" again and see if it leads to "about" which is a correct word.
Prediction: about # shows a correct guess-!
@@ -116,7 +116,7 @@ $ <terminal-4: device1> ./run.sh 192.168.33.1 8125 word 0
## Try out confidential word prediction in FL setting

This section explains how to try out confidential word prediction in FL setting. We make a simple word prediction model that is based on SimpleRNN of TensorFlow.
-We prepare [a docker image](https://github.com/Samsung/islet/releases/download/example-confidential-ml-v1.1/cca_ubuntu_release.tar.gz) that contains everything needed to try out this example and it involves 5 different instances-- *certifier-service*, *runtime*, *model-provider*, *device1*, *device2*-- meaning that you need to open 5 terminals, one for each of them.
+We prepare [a docker image](https://github.com/islet-project/islet/releases/download/example-confidential-ml-v1.1/cca_ubuntu_release.tar.gz) that contains everything needed to try out this example and it involves 5 different instances-- *certifier-service*, *runtime*, *model-provider*, *device1*, *device2*-- meaning that you need to open 5 terminals, one for each of them.

[TODO] Note that as of now we do not offer any convenient way to try out this example in your host machine directly instead of the docker image, as this example involves a lot of dependencies. Anyhow, we plan to support building and testing this example on the host PC in the near future.

@@ -149,7 +149,7 @@ $ <terminal-4: device1> ./run.sh 0.0.0.0 8125 word 1
Type correct answer: about # provide a correct answer for training
...
epoch: 90, loss: 0.000,0.000 # do on-device training
-... # wait for a global model from runtime
+... # wait for a global model from runtime
...
$ <terminal-5: device2> cd /islet/examples/confidential-ml/device
@@ -224,4 +224,4 @@ $ <terminal-5: device2> cd /shared/examples/confidential-ml/device
$ <terminal-5: device2> ./init.sh 192.168.33.1
$ <terminal-5: device2> ./run.sh 192.168.33.1 8126 word 1
# test it the same way we did with "How to test with simulated enclave (no actual hardware TEE) on x86_64"
-``
+``
4 changes: 2 additions & 2 deletions third-party/README.md
@@ -1,5 +1,5 @@
# Third-party projects

ISLET uses several third-party projects for realm, normal-world and testing.
-Third-party projects are managed using `submodule`
-which means they are forked from upstream to the branch of [ISLET-ASSET repo](https://github.com/Samsung/islet-asset).
+Third-party projects are managed using `submodule`
+which means they are forked from upstream to the branch of [ISLET-ASSET repo](https://github.com/islet-project/assets).
