Running UnLynx

UnLynx can be split into different running instances. The protocols specify the individual elements/building blocks that compose our software tool and can be tested independently. The services assemble the different protocols into components that provide different functionalities. The simulations enable simulating each of our services or protocols under specific settings. Finally, the applications use a real deployment of UnLynx to provide a set of APIs that can be made available to potential users. There are 4 global parameters in github.com/ldsec/unlynx/lib/constants.go that can be set before executing any of these instances (a sketch of setting them follows the list below).

  • TIMEOUT - stores the timeout for node communication; it can be changed by setting the CONN_TIMEOUT environment variable.

  • TIME - set to true to measure the time of computations.

  • VPARALLELIZE - sets the level of parallelization in the vector computations (>=1).

  • DIFFPRI - enables the DRO protocol (Distributed Results Obfuscation).
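
For example, here is a minimal Go sketch of setting these parameters before launching an instance. It assumes the parameters are exported as package-level variables of the lib package (imported here as libunlynx); the import path and variable names follow constants.go as referenced above, but treat them as assumptions rather than verified API.

package main

import (
	"os"

	libunlynx "github.com/ldsec/unlynx/lib" // assumed import path and package name
)

func main() {
	// TIMEOUT is derived from the CONN_TIMEOUT environment variable; in
	// practice you would usually export it in the shell before launching.
	os.Setenv("CONN_TIMEOUT", "2m")

	libunlynx.TIME = true       // measure the time of computations
	libunlynx.VPARALLELIZE = 16 // parallelization level for vector computations (>=1)
	libunlynx.DIFFPRI = false   // keep the DRO protocol disabled

	// ... start a protocol, service, or simulation here ...
}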

For more information on how to run a protocol/service/simulation/cmd, please refer to cothority_template.

Protocols

Note: we refer to encrypted messages as ciphertexts. Each protocol instance runs at every conode.

For each protocol, we provide a test (mostly for unit testing) that offers a small overview of how each building block operates and ensures that the code executes as expected. Each instance can be run directly either from an IDE (e.g., IntelliJ IDEA) or by executing go test.

Collective aggregation

The collective aggregation protocol collectively aggregates the local results of a query from all the servers. It uses a tree-structured aggregation: 1. the root sends an aggregation trigger message down the tree; 2. the leaves respond with their local results; 3. parent nodes aggregate the information received from their children; 4. these nodes forward the aggregation result up the tree.

Input parameters:

  • GroupedData : map[GroupingKey]FilteredResponse - data to be collectively aggregated

  • SimpleData : []CipherText - data to be collectively aggregated (simpler format)

  • Proofs : bool - set to true in order to compute proofs and publish them

You can run the protocol by filling either SimpleData or GroupedData (but not both; otherwise the protocol throws an error). A sketch of driving the protocol from a local test follows the output parameters below.

Output parameters:

  • FeedbackChannel : map[GroupingKey]FilteredResponse - the list of collectively aggregated ciphertexts
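
As an illustration, here is a minimal Go sketch of how a protocol test might drive an instance through onet's local test framework. The package aliases, the registered protocol name ("CollectiveAggregation"), the instance type, and libunlynx.SuiTe are assumptions based on the repository layout, not verified API; see the protocol's own test for the exact names.

package sketch

import (
	"go.dedis.ch/onet/v3"
	"go.dedis.ch/onet/v3/log"

	libunlynx "github.com/ldsec/unlynx/lib"
	protocolsunlynx "github.com/ldsec/unlynx/protocols"
)

// runCollectiveAggregation drives one protocol run over a local test tree.
func runCollectiveAggregation(ciphertexts []libunlynx.CipherText) {
	local := onet.NewLocalTest(libunlynx.SuiTe) // SuiTe: assumed project crypto suite
	defer local.CloseAll()
	_, _, tree := local.GenTree(3, true) // 3 conodes arranged in a tree

	p, err := local.CreateProtocol("CollectiveAggregation", tree)
	if err != nil {
		log.Fatal(err)
	}
	protocol := p.(*protocolsunlynx.CollectiveAggregationProtocol)
	protocol.SimpleData = ciphertexts // fill either SimpleData or GroupedData, not both
	protocol.Proofs = false

	go protocol.Start()
	aggregated := <-protocol.FeedbackChannel // map[GroupingKey]FilteredResponse
	log.Lvl1("number of aggregated groups:", len(aggregated))
}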

Deterministic tagging

The distributed deterministic tagging protocol deterministically tags ciphertexts. In other words, probabilistic ciphertexts are converted to deterministic tags (identifiers). To do this, each cothority server (node) removes its secret contribution and homomorphically multiplies the ciphertexts with an ephemeral secret. This protocol operates in a circuit between the servers: the data is sent sequentially through this circuit and each server applies its transformation. A conceptual sketch of the per-node step is shown after the output parameters below.

Input parameters:

  • TargetOfSwitch : []CipherText - data to deterministically tag

  • Proofs : bool - set to true in order to compute proofs and publish them

Output parameters:

  • FeedbackChannel : []DeterministCipherText - the list of deterministic ciphertexts (tags)
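
For intuition, the following Go sketch (written directly against kyber, not the UnLynx API; names and structure are illustrative) shows the transformation one node applies to an ElGamal ciphertext (c1, c2):

package sketch

import (
	"go.dedis.ch/kyber/v3"
	"go.dedis.ch/kyber/v3/suites"
)

var suite = suites.MustFind("Ed25519")

// deterministicTagStep removes this node's secret-key share x from the
// ciphertext and then blinds both points with the node's ephemeral secret s.
// Once every node has applied this step, c2 depends only on the plaintext
// and the product of the ephemeral secrets, i.e., it is a deterministic tag.
func deterministicTagStep(c1, c2 kyber.Point, x, s kyber.Scalar) (kyber.Point, kyber.Point) {
	partial := suite.Point().Sub(c2, suite.Point().Mul(x, c1)) // remove own contribution
	return suite.Point().Mul(s, c1), suite.Point().Mul(s, partial)
}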

Key switching

The key switching protocol switches a ciphertext encrypted under a specific key to a ciphertext encrypted under another key. To do this, each cothority server (node) removes its secret contribution and homomorphically adds a fresh contribution under the new key. This protocol operates in a circuit between the servers: the data is sent sequentially through this circuit and each server applies its transformation. A conceptual sketch of the per-node step is shown after the output parameters below.

Input parameters:

  • TargetOfSwitch : []CipherText - data to key switch

  • TargetPublicKey : kyber.Point - public key to switch to

  • Proofs : bool - set to true in order to compute proofs and publish them

Output parameters:

  • FeedbackChannel : []CipherText - the list of key switched ciphertexts
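
As a conceptual Go sketch (again using kyber directly, not the UnLynx API; names are illustrative), one node's key-switching step looks roughly like this:

package sketch

import (
	"go.dedis.ch/kyber/v3"
	"go.dedis.ch/kyber/v3/suites"
)

var suite = suites.MustFind("Ed25519")

// keySwitchStep removes this node's share x of the old collective key and
// adds a fresh contribution v encrypted under the new public key newKey.
// origC1 is the ciphertext's original randomness point; c1 accumulates the
// fresh randomness contributed by the nodes so far.
func keySwitchStep(origC1, c1, c2, newKey kyber.Point, x, v kyber.Scalar) (kyber.Point, kyber.Point) {
	newC1 := suite.Point().Add(c1, suite.Point().Mul(v, nil)) // accumulate v*B (B = base point)
	share := suite.Point().Mul(x, origC1)                     // this node's decryption share
	newC2 := suite.Point().Add(suite.Point().Sub(c2, share), suite.Point().Mul(v, newKey))
	return newC1, newC2
}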

Shuffling

The shuffling protocol rerandomizes and shuffles a list of ciphertexts. It operates in a circuit between the servers: the data is sent sequentially through this circuit and each server applies its transformation. A conceptual sketch of a single server's step is shown after the output parameters below.

Input parameters:

  • TargetOfShuffle : [][]CipherText - data to shuffle

  • Proofs : bool - set to true in order to compute proofs and publish them

Output parameters:

  • FeedbackChannel : [][]CipherText - the list of shuffled ciphertexts
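
For intuition, here is a conceptual Go sketch of one server's step, shown on a flat list of ciphertexts for simplicity (the protocol's actual input is a list of ciphertext vectors). This is not the UnLynx API: the real protocol derives the permutation securely and emits a verifiable shuffle proof, whereas math/rand below is purely illustrative.

package sketch

import (
	mathrand "math/rand"

	"go.dedis.ch/kyber/v3"
	"go.dedis.ch/kyber/v3/suites"
	"go.dedis.ch/kyber/v3/util/random"
)

var suite = suites.MustFind("Ed25519")

// ciphertext is a local stand-in for libunlynx.CipherText.
type ciphertext struct{ C1, C2 kyber.Point }

// shuffleStep rerandomizes every ciphertext under the collective key K and
// applies a random permutation, so outputs cannot be linked to inputs.
func shuffleStep(in []ciphertext, collectiveKey kyber.Point) []ciphertext {
	out := make([]ciphertext, len(in))
	for i, ct := range in {
		r := suite.Scalar().Pick(random.New())
		out[i] = ciphertext{
			C1: suite.Point().Add(ct.C1, suite.Point().Mul(r, nil)),           // c1 + r*B
			C2: suite.Point().Add(ct.C2, suite.Point().Mul(r, collectiveKey)), // c2 + r*K
		}
	}
	mathrand.Shuffle(len(out), func(i, j int) { out[i], out[j] = out[j], out[i] })
	return out
}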

Distributed Results Obfuscation (DRO)

Distributed results obfuscation is a special use of the shuffling protocol, defined to add random noise values and ensure differential privacy.

The input and output are the same as the shuffling protocol.

Services

The UnLynx service was built to support the sharing of sensitive data in a secure and private way. A query is sent to UnLynx and then broadcast to a number of different data providers. Each of these responds with homomorphically encrypted data, encrypted under a collective key (collectively built by the conodes). The responses are then shuffled, preventing any entity from linking them back to their respective owners. UnLynx deterministically computes tags on some of the response fields and then uses them to aggregate the remaining sensitive data. The final results are sent back to the querier. The sketch below illustrates this composition.
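
To make the order of operations concrete, here is a purely illustrative, compilable Go sketch of the pipeline. Every type and helper below is a hypothetical stand-in, not the UnLynx service API; see github.com/ldsec/unlynx/services/ for the real code.

package pipeline

// response is a hypothetical stand-in for a data provider's encrypted answer.
type response struct{ groupingAttrs, aggregatingAttrs []byte }

func shuffle(rs []response) []response               { return rs } // rerandomize + permute (unlink providers)
func deterministicallyTag(rs []response) []response  { return rs } // tag the grouping attributes
func collectivelyAggregate(rs []response) []response { return rs } // aggregate sensitive data per tag
func addDPNoise(rs []response) []response            { return rs } // optional DRO step (when DIFFPRI is set)
func keySwitch(rs []response, key []byte) []response { return rs } // re-encrypt toward key

// runQuery chains the protocols in the order described above.
func runQuery(responses []response, querierKey []byte) []response {
	shuffled := shuffle(responses)
	tagged := deterministicallyTag(shuffled)
	aggregated := collectivelyAggregate(tagged)
	obfuscated := addDPNoise(aggregated)
	return keySwitch(obfuscated, querierKey)
}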

This service comes with a set of test functions that make it possible to run this secure sharing tool under specific testing scenarios. These can be carried out either from an IDE or by executing go test.

Check the service (github.com/ldsec/unlynx/services/) and the paper for more details.

Simulations

We can run UnLynx simulations on three different platforms: localhost (the local machine), DeterLab (a state-of-the-art scientific computing facility), and the ICCluster (an infrastructure that offers computing and storage services to EPFL researchers). Here we show how to run the simulations on your local machine.

For each simulation, we have to specify its configuration parameters in the corresponding .toml file. Check github.com/ldsec/unlynx/simul/runfiles for examples.

e.g., the file shuffling.toml allows configuring different simulation settings for the shuffling protocol.

If TIME is enabled, you can check the time measurements for the computation by looking at the corresponding .csv file in github.com/ldsec/unlynx/simul/test_data.

e.g., the file shuffling.csv stores all the time measurements taken during the execution of a shuffling simulation.

To ease the task of parsing the time measurements, you can simply run github.com/ldsec/unlynx/simul/test_data/time_data/parse_time_data_test.go after setting the constants filename_read, filename_write, and filename_toml, for example:
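
A hypothetical excerpt of those constants: the names come from parse_time_data_test.go as referenced above, while the values and relative paths are only examples.

// Excerpt to adjust at the top of parse_time_data_test.go (illustrative values).
const (
	filename_read  = "../shuffling.csv"               // measurements to parse
	filename_write = "shuffling_parsed.csv"           // where to write the parsed output
	filename_toml  = "../../runfiles/shuffling.toml"  // simulation configuration used
)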

Localhost

cd <path_to_code_source>/ldsec/unlynx/simul
go build
./simul runfiles/unlynx.toml

Applications

Each application defines a set of APIs and deployment steps to install UnLynx on multiple conodes. This can be done either locally (for testing purposes) or on any other set of machines that are able to communicate with each other.

1. Depending on the architecture of the target machine, compile the code accordingly; for example, check ldsec/unlynx/cmd/unlynx/compileLinux.sh or ldsec/unlynx/cmd/unlynx/compileMac.sh to see how it is done.

2. Copy the compiled executable and example data files (ldsec/unlynx/data/unlynx_test_data.txt) to each server.

NOTE: To generate random data, have a look at:

 ldsec/unlynx/data/handle_data_test.go

3. On each server, run the "server setup" command and follow the installation guide:

./unlynx server setup

4. On the machine that will act as your client, create a group.toml file and append to it the content of all the public.toml files created during each setup command.

5. Start each UnLynx conode:

./unlynx server -c private.toml

6. Run a query, for example:

./unlynx -d 1 -f group.toml -s "{s0, s1}" -w "{w0, 1, w1, 1}" -p "(v0 == v1 && v2 == v3)" -g "{g0, g1, g2}"

What each flag stands for:

  • -d = debug level

  • -f = group definition file

  • -s = select attributes

  • -w = where attributes + values

  • -p = query predicate

  • -g = group by attributes
