LFS Series — Private Chains, The Local Layer & dDatabase 10

Peeps
10 min read · Mar 23, 2021

In the previous two posts, we outlined the all-new dMesh Graph-based DHT and Authenticator DHTs, as well as DHT-based bridging and validation. These new features in the dWeb 3.0 rollout provide a multi-writer distributed database and an authentication framework that can be utilized by applications that want to move beyond the blockchain for data storage and user authentication. This key networking layer of the dWeb is known as the "Remote" layer, although the dWeb is ultimately made up of four layers: Application, Remote, Local and Storage. Before diving into private chains, which are part of the Local Layer, I want to explain each of these key layers in depth, which should help you build a mental model of dWeb's new ecosystem of tools.

I. Application Layer

The application layer is where the Remote, Local and Storage layers are bridged into a single unified experience. An example of the application layer in action would be something like dBrowser, where data is pushed from a Local private chain to a Remote data layer, like a DHT and/or dWeb’s traditional storage layer (dWebSwarm + dDrive).

II. Remote Layer

The remote layer is typically powered by a DHT of some sort. I have discussed DHTs in depth in my previous three posts, but it is important to note that DHTs are written to by a user's Local Layer. In some cases, multiple users will write to the Remote Layer via their Local Layers, if they are taking part in a multi-party transaction.

III. Local Layer

Each user on the dWeb operates their own private chain, which is essentially an append-only distributed log that can only be written to by that particular user. These append-only distributed logs are known as dDatabases on the dWeb and are used to establish data integrity between peers. Data always originates at the Local Layer and is then written to the Remote Layer. It's important to note that both the Remote and Local Layers can be accessed by the public. In fact, when validating data, most Remote Layers have to access whatever Local Layer is attempting to write to them in order to verify data (i.e., when a private chain has data appended to it and attempts to write that data to a DHT, the DHT may fetch that particular index from the private chain to check the integrity of the original data).

IV. Storage Layer

The storage layer is a distributed file system that has been a part of the dWeb since its 1.0 release. The storage layer allows users to create a file system and share its network address with peers on the dWeb, so that entire file systems can be distributed across a network of peers. This is perfect for sharing the files of a website or a web application.

==== THE LOCAL LAYER & DDATABASE BASICS

So let's talk about the Local Layer. Private chains are powered by the dDatabase library, along with a slew of other modules. dDatabases are permanent data structures that can be accessed by any entity on the dWeb, whether it be a peer somewhere in the middle of the desert or a node on the DHT network. dDatabases utilize signed Merkle trees to verify the integrity of the feed in real time, which ultimately allows the Remote Layer to validate the origins of the data being stored across a neighborhood of nodes. Creating a dDatabase and appending data to it looks something like this:

```
var ddatabase = require('ddatabase')

var feed = ddatabase('./feed', {valueEncoding: 'binary'})

feed.append('Let freedom')
feed.append('stream', function (err) {
  if (err) throw err
  feed.get(0, console.log) // prints "Let freedom"
  feed.get(1, console.log) // prints "stream"
})
```

You could then replicate this dDatabase from one machine to another by doing something like this:

```
// On the creator's computer:
var net = require('net')

var server = net.createServer(function (socket) {
  socket.pipe(feed.replicate(false)).pipe(socket)
})

// On the downloader's computer:
var socket = net.connect(…)
socket.pipe(feed.replicate(true)).pipe(socket)
```

The true or false parameter passed to the "replicate" method indicates whether or not you're the initiator of the connection. Typically, the downloader is the initiator, since the creator is the broadcaster of the data itself. It's important to note that the dSwarm DHT is used to announce the peers currently "swarming" (hosting) a dDatabase: the dDatabase's network address is announced on dSwarm's DHT, and the peers who end up hosting it are then listed as peers under that network address.
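For context, announcing and finding peers for a feed over dSwarm might look something like the sketch below. This assumes dSwarm keeps the hyperswarm-style API it descends from; the `dswarm` require name, the `announce`/`lookup` options and `info.client` are assumptions, not confirmed dSwarm API:

```
const dswarm = require('dswarm')

const swarm = dswarm()

// Announce ourselves under the feed's discovery key and look up other hosts
swarm.join(feed.discoveryKey, { announce: true, lookup: true })

swarm.on('connection', (socket, info) => {
  // info.client is true when we initiated the connection,
  // which is exactly what replicate() needs to know
  socket.pipe(feed.replicate(info.client)).pipe(socket)
})
```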

== A dDatabase can be created using the following options (a sketch follows the list):

- valueEncoding — The encoding of the feed's values (JSON, UTF-8 or binary)
- sparse — Do not mark the entire feed for download; blocks are fetched on demand instead
- eagerUpdate — Always fetch the latest update that's advertised
- secretKey — Optionally pass the corresponding secret key
- storeSecretKey — Whether or not to store the secret key
- storageCacheSize — The number of entries to keep in the storage system cache
- onwrite — Optional hook called before data is written
- stats — Collect network-related statistics
- crypto — Optionally use custom cryptography for signatures
- noiseKeyPair — A NOISE-based static keypair for exchanging data privately with other peers
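As a rough sketch, a feed created with several of these options set might look like this. The option names come straight from the list above; the `onwrite` hook signature shown mirrors dDatabase's hypercore lineage and is an assumption:

```
var ddatabase = require('ddatabase')

var feed = ddatabase('./feed', {
  valueEncoding: 'json', // store JSON values instead of raw buffers
  sparse: true,          // only fetch blocks that are explicitly requested
  eagerUpdate: true,     // always follow the latest advertised update
  stats: true,           // collect network-related statistics
  onwrite: function (index, data, peer, cb) {
    // inspect (or reject) a block before it is written
    cb()
  }
})
```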

== A dDatabase can utilize the following options on the "replicate" method (a sketch follows the list):

- live — Whether or not to keep replicating after all remote data has been downloaded
- ack — Set to true to get explicit peer acknowledgement
- download — Whether or not you want to download data from peers
- upload — Whether or not you want to upload data to peers
- encrypted — Whether or not to encrypt the data sent, using the dDatabase keypair
- noise — Whether or not to perform the NOISE handshake (set to false to disable it entirely)
- keyPair — A keypair for NOISE authentication
- onauthenticate — An optional hook that can be used to authenticate peers
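Putting a few of these together, a replication stream opened with explicit options might look like this (reusing the `socket` from the earlier net example; the option values are illustrative):

```
// Replicate as the initiator: stay live, download only, don't upload
var stream = feed.replicate(true, {
  live: true,      // keep replicating as new data is appended
  download: true,  // pull data from the remote peer
  upload: false    // don't serve our data back
})

stream.pipe(socket).pipe(stream)
```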

The initiator of a replication can choose to download whichever portion of a dDatabase they want, by index. For example, if they want `index 2`, they simply request index 2; if they want a range, they download that range; and if they want the entire dDatabase, they can download it in its entirety. Each downloaded index carries a signature that proves the validity of the feed up to that point in the chain. This allows any initiator in the replication process to prove that the creator of the dDatabase created the entry being retrieved, since a dDatabase is addressed by a dWeb network address that ultimately derives from its public key. In other words, every entry in the dDatabase is signed with the dDatabase's private key, and it can easily be determined that an entry was created by the owner of the dDatabase, since the downloader has access to the discovery key as well as the public key. For more information on the cryptographic functionality surrounding dDatabase, I recommend checking out @ddatabase/crypto (https://github.com/dwebprotocol/ddatabase-crypto).

Obtaining a signature at a particular index would look something like this:

```
feed.signature(1, function (err, signature) {
  if (err) throw err
  console.log(signature)
})
```

The signature object passed to the callback looks like this:

```
{
  index: lastSignedBlock,
  signature: Buffer
}
```

Verifying a signature for a particular index would look something like this:

```
feed.verify(1, signature, (err, success) => {
  if (err) throw err
  console.log(success) // true if the signature is valid
})
```

Here, `success` will be true as long as the signature checks out and an error isn't thrown.

You could also audit all data in the feed by doing something like this:

```
feed.audit([callback])
```

When the audit completes, a report is passed to the callback that looks something like this:

```
{
  valid: 10,  // how many data blocks match the hashes
  invalid: 0  // how many did not match
}
```

==== INTERLINKED DDATABASES

A newcomer in the 3.0 release is a module called Basestore (https://github.com/dwebprotocol/basestore). Basestore allows developers to create "namespaced" dDatabases, where a "default" dDatabase is created and its keys are used to derive "sub-bases": separate dDatabases nested within the default base. Each of these dDatabases is interlinked with the others, and one dDatabase can be "mounted" within another. This is especially useful when several different applications want to share the same Basestore, or when a single application wants to namespace data across multiple dDatabases. There is also a Basestore networking module (https://github.com/dwebprotocol/basestore-networker) that simplifies replicating multiple interlinked dDatabases amongst a swarm of peers.
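As a rough illustration of the idea, and assuming Basestore follows the Corestore-style API it is derived from (the `default()` and `get({ name })` calls below are assumptions, not confirmed against the repository):

```
const Basestore = require('basestore')

const store = new Basestore('./bases')

// The "default" dDatabase anchors the store; its keys seed the namespace
const defaultBase = store.default()

// Sub-bases are derived deterministically from the default base's keys
const posts = store.get({ name: 'posts' })
const comments = store.get({ name: 'comments' })
```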

==== PROPAGATING DATA FROM THE LOCAL LAYER TO THE REMOTE LAYER

By design, applications initially write data to a user's local dDatabase, which is managed by the application itself. The application can then write an entry to the application's public DHT that references the local entry, so that remote users of the application, as well as the DHT nodes that store its data, can access the data and verify its origins. For example, consider the following dDatabase entry:

```
INDEX    VALUE
=====    =====================
5        data: hello world
         signature: <signature>
```

The signature proves the validity of this entry at this point in the tree. It can be verified by any outsider who audits the Merkle tree, even if the entire dDatabase is audited, since all leaf nodes and hashes will prove valid all the way back to index 0, as long as every entry was indeed signed by the dDatabase's private key (as shown in the examples above, where we retrieved a signature at an index, verified it, and audited the entire base).

By allowing users to write data to a private chain before broadcasting it to a public DHT (from the Local Layer to the Remote Layer), we maintain a single vantage point over a user's data at the local level, while that very same data can be viewed from multiple vantage points at the remote level. This also allows users to come to consensus on multi-party transactions: multiple users create entries in their own private chains, then broadcast the same transaction details to the DHT, where each broadcast references the entry in its author's private chain, forming consensus around a specific transaction identifier. DHT nodes can then verify the validity of both local entries before storing the broadcasted entries at the remote level (on the DHT). This small form of consensus is enabled by the addition of the Local Layer, where each individual user signs a particular action and then broadcasts it to the DHT; nodes on the DHT can verify that the action was indeed agreed upon by multiple private chains, where one references the other. Authentication would obviously be a part of this process, but that isn't a mountain for developers to climb, considering how easy Authenticator DHTs are to deploy, as pointed out in my previous post.
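To make the handoff concrete, here is a rough sketch of the local-to-remote flow. The `dht.put` call and its record shape are hypothetical stand-ins for whatever the application's DHT exposes, not a real dWeb API; only the feed calls come from the examples above:

```
// 1. Write the data to the user's private chain (the Local Layer)
feed.append(JSON.stringify({ data: 'hello world' }), function (err) {
  if (err) throw err
  var index = feed.length - 1

  // 2. Retrieve the signature proving the chain up to this entry
  feed.signature(index, function (err, sig) {
    if (err) throw err

    // 3. Broadcast a reference to the local entry on the Remote Layer.
    //    dht.put is hypothetical; a real DHT node could fetch `index`
    //    from the feed and verify `sig` before storing the record.
    dht.put({
      key: feed.key.toString('hex'),
      index: index,
      signature: sig.signature
    })
  })
})
```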

==== DDATABASE SINGLE-WRITER ABSTRACTIONS

While dDatabases are great, there are abstractions like dTrie (a single-writer trie-based key-value store) and dTree (a single-writer binary tree) that can be used rather than dDatabase directly, since they're built on top of a dDatabase feed. They work just like a dDatabase and inherit all of dDatabase's functionality, with the addition of the database-like API you're used to from key-value stores like LevelDB (put, get, del, etc.). dTrie also has a directed graph that can be layered on top of it, enabling developers to do a depth-first traversal of keys within the trie itself (https://github.com/dwebprotocol/dwebtrie-multigraph). Creating a dTrie is as easy as this:

```
const dwebtrie = require('dwebtrie')

const db = dwebtrie('./trie.db', {valueEncoding: 'json'})

db.put('Let freedom', 'stream', function () {
  db.get('Let freedom', console.log)
})
```

Using the `dwebtrie(storage, [key], [options])` constructor, you can set several options on the dwebtrie instance, as follows:

```
{
  feed: aDDatabase,      // use this feed instead of loading storage
  valueEncoding: 'json', // set value encoding
  subtype: undefined,    // set subtype in the header message at feed.get(0)
  alwaysUpdate: true     // perform an ifAvailable update prior to every head operation
}
```

The following API methods can be used:

- `get(key, [options], callback)` — Retrieve the entry with a specific key
- `put(key, value, [options], [callback])` — Create an entry with a key/value pair
- `del(key, [options], [callback])` — Delete the entry with a specific key
- `batch(batch, [callback])` — Insert/delete multiple values atomically (see the sketch below)
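A quick usage sketch of `batch`, assuming the op format carries a `type` field alongside `key` and `value` (an assumption based on similar stores in this family):

```
db.batch([
  { type: 'put', key: 'Let freedom', value: 'stream' },
  { type: 'put', key: 'hello', value: 'world' },
  { type: 'del', key: 'stale-key' }
], function (err) {
  if (err) throw err
  // all three operations were applied atomically
})
```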

The dTree abstraction works similarly. For more information on its API, please refer to its official repository at https://github.com/dwebprotocol/dwebtree.
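Purely as an illustration, and assuming dTree exposes a promise-based, Hyperbee-style API (the constructor shape and method names below are assumptions, so check the repository before relying on them):

```
const Dwebtree = require('dwebtree')

const tree = new Dwebtree(feed, {
  keyEncoding: 'utf-8',
  valueEncoding: 'json'
})

async function main () {
  await tree.put('Let freedom', 'stream')
  const node = await tree.get('Let freedom')
  console.log(node) // e.g. { seq, key, value }
}

main()
```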

==== DDATABASE MULTI-WRITER ABSTRACTIONS

The dAppDB library (https://github.com/distributedweb/dappdb), which has been around since dWeb 1.0, is a multi-writer key-value store written on top of a dDatabase. dAppDB uses the exact same API as dTrie, but it allows the dAppDB creator to authorize remote peers to replicate their writes into the original dAppDB using their own keys. A remote peer writes to its own local feed (`db.local`), and as long as the original dAppDB has authorized that peer's key, all writes to `db.local` are replicated into the original dAppDB.
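For orientation, opening a dAppDB might look something like this, assuming dAppDB keeps the hyperdb-style constructor it descends from (the exact arguments are assumptions):

```
var dappdb = require('dappdb')

// The creator opens (or creates) the database locally
var db = dappdb('./my.db', { valueEncoding: 'utf-8' })

// A remote peer opens the same database by its public key,
// getting their own writable db.local feed
var remote = dappdb('./replica.db', key, { valueEncoding: 'utf-8' })
```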

The original creator of a dAppDB can authorize a remote peer to write to their dAppDB by doing something like this:

```
db.authorize(key, [callback])
```

To get the creator of a dAppDB to authorize you, you would probably end up doing something like this:

```
db.on('ready', () => {
  console.log('Your local key is ' + db.local.key.toString('hex'))
  console.log('Tell an owner to authorize it using peersockets or some other type of extension messaging')
})
```

Within the `ready` event handler, you would use some sort of extension messaging to send your key (`db.local.key`) to the dAppDB creator, who could then automatically authorize users to write to the DB, as long as they meet certain guidelines or their data passes certain validation rules. These are things developers would have to build on top of dAppDB itself, since such rules are custom to the type of application using dAppDB.

Users can easily check if they’re authorized to write to a dAppDB by simply doing something like this:

```
db.authorized(db.local.key, (err, auth) => {
  if (err) console.log('The following authentication error occurred:', err)
  else if (auth === true) console.log('You are authorized')
  else console.log('You are not authorized')
})
```

This tenth release of dDatabase is full of awesome features, but it's dDatabase's usage as a Local Layer, alongside Remote Layers like a DHT, that is allowing applications to move beyond the blockchain. On top of that, multi-writer abstractions built on dDatabase allow applications to build beyond a DHT. As a developer, it all comes down to what you're trying to build and which solutions fit best with your architectural design. With dWeb 3.0, there are many solutions for distributing your data, whether it's a DHT, a dAppDB, or private chains like a raw dDatabase, a key-value store like dTrie or a binary tree like dTree.

That brings us to the Storage Layer, which means we get to talk about the fifth release of dDrive, dWeb's powerful distributed file system.

======

WHAT’S NEXT?

We’re going to break down dWeb’s “Storage Layer” and the fifth public release of dDrive, dWeb’s distributed file system.

======
