Yesterday, we kicked off the Let Freedom Stream (LFS) series, where I described dMesh’s all-new Graph-based DHT library, which allows us to traverse any dMesh DHT almost effortlessly for the data our applications rely upon. While it’s quite amazing that we’re able to do this on a peer-to-peer basis, it’s also important that we validate those who create these sorts of entries on an application’s DHT, or else we would never be able to trust the data being distributed across it. For example, I could create a dSocial post for the @jared user by simply creating an entry with the key “jared/posts/500” and then creating a key/value entry for “500” that includes the post data within the value object (see the previous post). I think we can all agree that this would be a complete and total disaster, considering anyone could create this entry. In other words, anyone could be me. This is where DHT-based authentication and validation come into play.
As I described previously, a DHT is just a distributed key-value store, and it should come as no surprise that it can be used to store user data. Recently, we began working on what we call “Authenticator DHTs”: DHTs specifically formatted for usernames, permission levels and the keypairs associated with them, just as dSwarm’s DHT (https://github.com/dwebprotocol/dht) is specifically formatted for network addresses and the peers that are “swarming” those addresses (the peers who have the data associated with them), like the network address of a dDrive. Try to imagine a DHT that contains data formatted as follows:
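The original example table isn’t reproduced here, so as a hypothetical stand-in (the field names follow the description below; the placeholder key strings are not real keys), a single entry might look like:

```javascript
// Hypothetical sketch of one Authenticator DHT entry: a username maps to its
// permission levels, each associated with a DWEB-prefixed public key.
const userEntry = {
  key: 'jared', // the username is the lookup key
  value: {
    owner: 'DWEB<owner-public-key>',   // signs all future mutations; cannot be changed
    active: 'DWEB<active-public-key>', // can be rotated via an "owner"-signed mutation
  },
};
```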
This DHT is quite simple and of course, like any DHT, its entries are spread across the peers who are a part of it. In this case, that would be the users who exist within it. If there are 1 million users, there would be 1 million entries, which means each user would probably host or maintain 2–5 of those entries on their machine for a decent amount of redundancy. In fact, redundancy on an Authenticator DHT is quite important, considering data loss would be a disaster as well (users would simply disappear, which would allow hackers to replace those entries with compromised keys). Each user entry must include both the “owner” and “active” permission levels, each of which must be associated with a DWEB public key (a key with the DWEB prefix). Since these entries are mutable, when they are initially created, they are signed by a private key, which in turn MUST be the private key associated with the user’s “owner” permission level within the entry itself. That same key is then required to edit the entry in the future. For example, if you wanted to update your “active” permission level, you would need to sign that mutation with the private key associated with the record’s “owner” public key. Sadly, the “owner” public key cannot be changed, but users can be completely removed and re-added if need be. As with editing a mutable entry, removing one requires a signature from the “owner” private key. Of course, there is no data loss in this process, as the data created by a user via other applications is not stored in the Authenticator DHT. This separation is actually brilliant in many ways, but we’ll get into that a bit later in the post when we speak on DHT bridging. The important thing to understand is how a DHT like this stores user-related data and how that data is mutated by the entry creators (the actual users), regardless of where those entries are stored.
An Authenticator DHT, like any DHT, has a set of validation rules that entries must follow, or the nodes on the DHT that attempt to store the data will simply reject it (this could be considered a form of consensus on a small scale). In the example above, I spoke on the “owner” private key having to sign for the mutation of an entry, which is a simple example of DHT-based validation at work. Simply put, the validation process goes something like this when creating an entry on an Authenticator DHT:
1. The DHT validates that the “owner” and “active” fields exist on the “user object” and contain public keys. If not, it rejects the entry, since each entry must have both an owner and an active key.
2. It then checks the validity of both DWEB public keys. If they’re not valid keys, it rejects the entry.
3. It then checks the mutable entry for a signature and ensures that the signature was made by the private key associated with the public key in the user object’s “owner” field.
4. If all checks pass, the entry is propagated to the appropriate nodes on the DHT and stored.
NOTE: Each node goes through this same validation process before storing data. Since data is only stored on a small number of nodes on the network, this should never be considered network-wide consensus, but if 4–5 nodes agree that the data is valid, according to a DHT’s configured data physics, then the data should be sound. It would be pretty hard for 4–5 malicious nodes to hijack this process and ultimately find themselves as receivers for a specific piece of data, due to the way data is autonomously spread across a Kademlia DHT.
It’s important to note that while these are the validation rules in place for an Authenticator DHT, you could write your own DHT for your own applications, each with its own validation rules. In fact, this is how DHTs like dSocial’s can validate whether the user creating a post is in fact an actual dSocial user. To explain this, it’s important to note that dSocial uses its own DHT for authentication data (an Authenticator DHT) and another DHT for storing application-related data (a custom DHT). When a user creates a dSocial account, they select a username, keys are generated for them and that user entry is propagated to the Authenticator DHT. When that user executes an action in the dSocial application (post, reply, love, repost, etc.), dSocial’s regular DHT validates the resulting entries based on what they are for, by querying the Authenticator DHT and validating the signature included with each entry, to ensure that the data is being stored by the actual user referenced in the Authenticator DHT. This process works as follows:
dSocial Application <=> dSocial DHT <=> dSocial Authenticator DHT
1. A user attempts to login to dSocial by verifying the keys associated with a specific username/permission level on dSocial’s Authenticator DHT. If verified, dSocial organizes the front-end UI for that particular user.
2. The @jared user attempts to “love” a post. When doing so, dSocial’s front-end attempts to create a mutable entry on dSocial’s DHT for the @jared user’s love action, by creating the following entries:
- /500/loves/jared => null
- /jared/loves/500 => null
3. Per the DHT’s validation rules, the @jared user must sign the mutable entry with his “owner” private key, matching the “owner” public key for the @jared user on the dSocial Authenticator DHT. This is possible because the dSocial DHT is designed so that a username is included in an entry key (i.e. /500/loves/jared or /jared/loves/500), which allows dSocial’s DHT to perform a lookup for that user on dSocial’s Authenticator DHT and ensure that the mutable entry is signed with a private key that matches the @jared user’s published “owner”-level public key.
This sort of behavior is made possible by what we call DHT Bridging, where two or more DHTs, like dSocial’s DHT and dSocial’s Authenticator DHT, can exchange information with each other. DHT Bridging and DHT-based validation go hand-in-hand, considering validation can occur at a higher level when one DHT can use the data distributed via a totally separate DHT within its own validation processes. In some cases this can be very efficient, like user authentication; in other cases, it can cause performance issues, so it’s important to design a DHT Bridge in a way that enhances the performance of your application.
So how would one design a DHT Bridge, and how do you design a DHT to handle a specific data schema or set of validation rules? Creating a DHT Bridge is quite easy, especially with dMesh’s core DHT software, which allows for the creation of custom commands and validation rules on top of it. One DHT can be designed to query another, just like any software can query a remote DHT for records. Using dMesh DHT’s Node-based API, one DHT can perform a depth-first traversal or look up single entries against a remote DHT or a set of remote DHTs. dMesh’s core DHT can also be customized with a custom set of validation rules on top of its existing functionality, just as we’re currently doing with dSocial. In fact, these validation rules could be added to the “put” API, so that entries must pass a set of specific rules before being propagated to the DHT. In this way, custom DHTs can be created on top of dMesh’s core DHT software quite easily, which should eventually lead to all kinds of DHTs handling all kinds of data.
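One way those put-time rules might be wired up, assuming a hypothetical DHT class that accepts pluggable validator functions (none of these names are dMesh’s actual API):

```javascript
// Hypothetical sketch: a custom DHT whose "put" runs every entry through a
// set of validation rules before storing and propagating it.
class CustomDht {
  constructor(validators = []) {
    this.validators = validators;
    this.store = new Map(); // this node's local slice of the key space
  }

  put(key, entry) {
    for (const validate of this.validators) {
      if (!validate(key, entry)) return false; // rejected: never stored
    }
    this.store.set(key, entry);
    // ...propagation to the closest nodes on the DHT would happen here...
    return true;
  }
}

// Example rule: reject unsigned entries.
const requireSignature = (key, entry) => Boolean(entry && entry.signature);
const dht = new CustomDht([requireSignature]);
```

Because the rules are just functions, one of them could perform a lookup against another DHT, which is all a DHT Bridge really is at this level.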
Ultimately, it makes more sense for a distributed application to have its own dedicated distributed database built to handle its own complex data schemas, as well as its own user database and authentication layer(s). While this may make applications a bit more “fat” than those built on blockchains, they are theoretically more distributed and decentralized. Also, since it isn’t possible for a single blockchain to host the world’s data, most applications would be forced to run their own blockchains anyway. Considering how massive blockchains are and how hard it is to decentralize a blockchain through community adoption, it makes more sense for an app to use its own lightweight DHT for data distribution and user authentication. The DHT can be built into the application itself, allowing it to grow naturally with its user base, without having to convince people to download and install a massive blockchain application.
While dMesh’s core DHT software is certainly in its infancy, I do believe that over the coming months and years, distributed and decentralized applications will use similar solutions instead of bloated blockchain platforms, due to the performance boost this approach provides, its ability to scale organically with an application, and the elimination of blockchain-related utility fees for storage and compute resources. Considering dMesh-based Authenticator DHTs can be used as a DPKI (Decentralized Public Key Infrastructure), applications no longer have to rely on a blockchain for DPKI and can instead handle their own authentication procedures. Lastly, since all of these DHTs can talk to one another, we can ensure that the data being stored on our DHTs is validated, that there is some form of consensus amongst the nodes that actually store the data on these networks, and that data integrity exists for the applications that rely on these DHTs. With that said, blockchains should slowly become a thing of the past, especially as DHTs begin to appear in different shapes and sizes.
That brings us to the next post in the series, where we’re going to break down the 10th public release of dDatabase (dDatabaseX), as well as the launch of private chains.