"Confidential computing" might seem to refer to homomorphic encryption, but has nothing to do with it in its usage here. After searching around a bit, I suspect that Microsoft Azure first used it in 2017 to refer to code running within a trusted enclave.
It seems to me that while Asylo is agnostic about the specific TEE used, it is primarily targeted at Intel SGX [1]. Instead of having to trust Google to run your code correctly and not read your data, you'd have to trust Intel to manufacture a secure enclave and essentially bake in a private key that cannot be read. You could use the public key to encrypt your code and workload, and it would run in a part of the processor that Google presumably cannot access (or measure [2]).
A good further introduction might be this paper [3] (especially the diagram on page 2), or this answer [4].
I'll repeat my main concern with this system: you will reinforce Intel's position as 'feudal lord' in this model [5].
Within 10 years every mainstream computing hardware environment will have some kind of TEE. They already exist on the iPhone (the SEP), Google's Android phones (the ARM TEE), upcoming high-end ARM processors in general (I think they were calling it "CryptoIsland"?), high-end Intel processors (SGX) and high-end AMD processors (SEV).
The whole point of Asylo is to provide a hardware abstraction layer to make applications portable across different enclaves; it's the opposite of reinforcing Intel's position.
That’s exactly the level of abstraction that we’re looking to provide in Asylo. The parent post linked to the asylo/identity/sgx directory, which contains SGX-specific implementations of some of our higher-level identity abstractions [1]. For instance, “EnclaveAssertionGenerator” defines an interface for generating attestations bound to an enclave’s identity, and “sgx::LocalAssertionGenerator” (a construct internal to our framework) provides that functionality for SGX.
> The whole point of Asylo is to provide a hardware abstraction layer to make applications portable across different enclaves; it's the opposite of reinforcing Intel's position.
My main concern is not "reinforcing Intel's position" --- it could just as well be AMD, or ARM, or any other relatively tiny group of hardware manufacturers; the point is that everyone else is giving up and handing the ultimate control over their computing devices and the software they run to a small group, and that is most doubleplus ungood.
> The whole point of Asylo is to provide a hardware abstraction layer to make applications portable across different enclaves; it's the opposite of reinforcing Intel's position.
Yes, I shouldn't have conflated Asylo and SGX (or TEEs in general – where SGX is now dominant for the remote attestation model). The concern was with SGX, maybe even specifically with its use together with other TEEs for ubiquitous DRM on consumer devices. Asylo could indeed drive competition in the case of remote attestation.
An extensive open source framework for efficient homomorphic encryption would have been so much more exciting, and I really hope there will be some kind of breakthrough that reduces the current overhead significantly so that it becomes more commonly used in the future. Oh well, at least there is a theoretical foundation (for completely trustless computation) on which we can build.
Make no mistake: this is nothing more than the old "treacherous computing" that RMS warned about a long time ago, but coming back in new clothes, and is going to be used the most by DRM and other user-hostile applications. They're just trying to sneak it past everyone under the guise of "security" and other ostensibly-somewhat-friendly uses, but don't be fooled.
Many of us on the Asylo team share your reservations about DRM. However, the capability to run software in a not-entirely-trustworthy environment leads to many positive possibilities. For instance, you could imagine a world in which customers didn’t have to trust their cloud vendor or worry about their data falling into unauthorized hands. Or you could implement chat applications which can prove to you that your communications really are being encrypted end-to-end.
In our view, trusted computing has applications well beyond DRM.
The main TEE Wikipedia article wasn't very informative for me (about as high level as this blog post). Following links from it brought me to the Wikipedia article on Intel's "Software Guard Extensions" [1], which actually defines enclaves:
"Intel SGX is a set of central processing unit (CPU) instruction codes from Intel that allows user-level code to allocate private regions of memory, called enclaves, that are protected from processes running at higher privilege levels."
I still don't fully understand the security model of enclaves (for instance, the same Wikipedia page also talks about modifying Spectre to work against enclaves [2]).
The security model of enclaves is as follows: enclaves rely on the OS for their resource management and scheduling; however, the OS cannot compromise the enclave.
Is it possible to write applications in languages other than C/C++? Even with C++, it appears from the examples that this is more about handling specific secure data and seems to rely on special data structures. How do we convert existing applications to take advantage of Asylo? Does it involve moving the sensitive parts to an enclave app and communicating with it from the normal one?
We started with C/C++ mostly because that's what we need for the bottom layer of a POSIX-like stack. For instance, to use OpenSSL we need to be able to build C and if we wanted to support, for instance, Python we would need to build its C language components. Some of the applications we want to target include Redis and SQLite and, again, there we need C and broad support for POSIX APIs. Ideally, those applications would build for Asylo out of the box, but we have some work to do to get that to work.
Going forward, we are very interested in broader language support. For instance, we are currently working on support for Go. Rust would also be interesting because it (potentially) offers an orthogonal set of security guarantees at compile time.
Makes sense. Eagerly waiting for Node.js and Mono to be recompiled using Asylo. Do you foresee additional complexity in supporting managed languages? I'm guessing a Mono program with its base recompiled for Asylo would be more secure than using SecureString in C#.
The Asylo framework has partial support for POSIX APIs and system calls. Each language implementation that implicitly depends on system calls, whether in its generated code or in its runtime environment, will need to be inspected and tested. Languages that depend on unimplemented system calls or POSIX APIs for basic functionality will pose some challenge, depending on just which calls they need. If a runtime forwards calls to non-crucial system calls that Asylo does not currently support, then Asylo would need to be extended to satisfy the linker with at least a stub implementation that calls abort().
Asylo does provide support for basic I/O, sockets, and threads, so basic language functionality within Asylo should not be a significant challenge. We welcome any pull requests you might have to support your favorite language.
Developers using Asylo must still be concerned about writing buggy code. If you write past the end of a buffer with user data passed into the enclave, that code is still vulnerable. We haven’t fixed that part of the software development process.
The release today is just a start. We are looking at supporting additional languages and toolchains. Future releases/community contributions should also bring richer POSIX support.
As to refactoring--it is really up to the developer. With sufficient POSIX support, an entire POSIX-compliant app can live inside an enclave. On the other hand, for security reasons, the developer may decide to refactor their application.
It would be helpful if the doc made it clear which enclaves are supported, and how you get your code onto them. While reading several of the pages I couldn't work out if this is x86 specific, supports ARM etc too, and how you would even get hello world into the enclave in the first place.
Thanks for the helpful feedback. To answer your question: Asylo is currently x86 specific and provides a simulated enclave backend. We plan on evaluating additional enclave technologies going forward, with the goal of supporting those which gain the most market traction and community support.
Obvious disclaimer: currently working at Google on Asylo.
This is really promising. The use of an enclave is tightly tied to its hardware, so having a framework with a plugin-like architecture definitely helps. I may be wrong, but I have the impression that the development of TEEs within virtual machines and containers is still in its early stages. I am looking forward to seeing how Asylo will help with this.
Asylo is not tied to EPID; the framework aims to abstract away any unique behavior specific to TEE implementations, and provide a common backend interface that developers can code against. The goal is to allow developers to easily migrate their apps between backends with little to no source-code changes.
Specifically for attestation purposes, Asylo defines the EnclaveAssertionGenerator[1] and EnclaveAssertionVerifier[2] interfaces; these will need technology-specific implementations.
In this initial release we only support a simulated backend, for experimental development. We'll continue looking into specific TEE technologies going forward.
The name doesn’t inspire confidence in me. Too close to “Asylum”, but I guess they’re going for “a silo”.
It's just my opinion. I know the meaning of the word Asylum, but as I explained below...it's the association that I get from it. It's like using the word Niggardly - even though the definition is not related to race, people don't use it because it just sounds wrong.
[1] https://github.com/google/asylo/tree/master/asylo/identity/s...
[2] https://arxiv.org/abs/1702.08719
[3] https://eprint.iacr.org/2016/086.pdf
[4] https://security.stackexchange.com/questions/175749/what-are...
[5] https://news.ycombinator.com/item?id=15936121