Data Gateway

Data Gateway reads data from an Aerosense receiver and shoves it into the cloud.

Get Started Quick

Data Flow

The data flow from the aerosense sensor modules looks like this:

Node (edge processor on-blade)
->  Base Station (bluetooth equipment on-tower)
--->  Gateway (data manager and uploader on-tower)
----->  Ingress (Cloud Function to receive data on-cloud)
------->  Google Cloud BigQuery + Google Cloud Store (database / object storage system)
    |---->  Digital Twin (data analysis system)
    |---->  Jupyter Notebooks (data analysis/introspection for researchers)
    |---->  Dashboard (data visualisation for researchers and system installers)

A Node streams data to the Base Station via bluetooth. The Base Station writes the bytestream directly to a serial port. The Gateway (this library) reads the bytestream from the serial port, decodes it and buffers it in local storage. The Gateway is then responsible for either:

  • establishing a connection (websocket) to Ingress and writing the buffered data, or

  • packaging the data into events and files which are posted to Ingress.

The Gateway is also responsible for managing the buffer and local storage to minimise data loss in the event of internet outages.
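
To make that division of labour concrete, the gateway's read loop is conceptually along these lines. This is an illustrative sketch only, not the actual data-gateway implementation; the port name is a hypothetical example and the baudrate is taken from the example configuration later in this document.

# Illustrative sketch only -- NOT the actual data-gateway code.
# Assumes pyserial is installed; "/dev/ttyACM0" is a hypothetical port.
import serial

ser = serial.Serial("/dev/ttyACM0", baudrate=2300000)
buffer = bytearray()
while True:
    # Read whatever bytes the Base Station has written to the serial port
    buffer.extend(ser.read(ser.in_waiting or 1))
    # In the real gateway: decode packets from the buffer, batch them into
    # time windows, then upload to Ingress (or persist locally if offline)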

The code for the Cloud Function Ingress is also included in this repository.

Deployment

To deploy the Aerosense system, you must:

  • Install gateway code to the raspberry pi in the base station

  • Install and connect the physical base station

  • Configure the gateway

  • Register this installation to the aerosense database

Go through the following steps:

  1. Install using balena (strongly recommended) or manually.

  2. Configure the gateway for the first time.

  3. Register your installation.

  4. Check the gateway.

Once complete, move on to using the gateway.

Using Balena

A fleet of devices (under the aerosense organisation on BalenaCloud) is managed by Balena, and this is by far the preferred way of doing things.

Balena manages device installation, health and continuous deployment of code (whenever the main repository is updated, a fleet-wide update is triggered). It also works as a portal to the device, allowing you to log in and view device status as well as opening a terminal shell to individual devices for test and diagnostic purposes.

You can join the balena organisation with your github account.

Installation with balena

With a fresh new raspberry pi (or an old one with a wiped SD card!) you’ll want to install the gateway code. You do this by Adding a Device.

Tip

Technically, the “device” is tied to the SD card, not the raspberry pi itself. SD cards are notorious for failing after a short cycle time, so it’s well worth buying high quality SanDisk SD cards rather than the cheap equivalents, or you may find yourself dealing with a failure in the field.

Add a device

Follow the balenaCloud instructions to install balena on the SD card and add it to the gateways fleet (in the aerosense organization):

_images/adding-a-device.png

Hit ‘add a device’ and balena will take you through the process of installing and deploying.

Once added, the device will appear in your balena dashboard with a coolname, like fried-firefly or holy-lake.

Labelmaking is the way

Steal the labelmaker from PBL, and label your raspberry pi so you know which one is which.

Once added, you can follow the instructions to configure your device.

Balena Organization Admins

Use your github account to register with balena. Existing admins for the aerosense organisation can log into BalenaCloud then invite you.

_images/current-admins.png

The current list of admins in balena.

Manual Installation

Note

Once installed, you’ll need to configure the gateway with a service account; follow the setup steps using your own terminal and system environment variables.

Warning

It’s possible to manually install gateway code to any machine (eg your own laptop) but by far the easiest way, even for development purposes in the lab, is to use balena to deploy the code straight to a raspberry pi.

Installing manually (on a Raspberry Pi 4)

Although data-gateway can be run on a wide range of hardware, it’s generally aimed at being run on a Raspberry Pi on board a turbine (in the base station box on the tower).

It’s anticipated that you’re using:
  • Raspberry Pi 4

  • With at least 2GB ram

  • Python >= 3.8

You’ll need to install Raspberry Pi OS (formerly “Raspbian”, which was a much better name) onto your pi. Use the current instructions from raspberrypi.org, and follow their setup guides.

When booted into your pi, use the following commands to install…

sudo apt-get update
sudo apt-get install libhdf5-dev libhdf5-serial-dev

git clone https://github.com/aerosense-ai/data-gateway.git
cd data-gateway
pip install -r requirements-pi.txt

This installs the gateway CLI, which enables you to start the gateway.

Installing on Other Hardware

There’s no reason data-gateway can’t be run on a wide range of hardware, or your own development laptop in the lab.

However, we’ve only tested it for the Raspberry Pi 4, which has a quad-core processor and is unix-based.

The main consideration when choosing other hardware is CPU count: data-gateway uses three processes and multiple threads, so a dual-core CPU is a conservative choice. Additional vCPUs reduce the likelihood of the packet reader being blocked, improving the stability of the system. In reality the processes are sufficiently lightweight that they’d probably be just fine on a single core, but we haven’t tested that, so please run extensive tests prior to field deployment if you go down this route!

Installation for developers

If you’re developing data-gateway code itself you’ll need to follow the instructions for developers in the repo’s README.md file.

Configuration

Note

The majority of users will use Balena, so the examples here are balena-based. Configuring a manually installed environment is broadly the same.

Add a service account

Using the name generated by balena when the device was added, create a dedicated service account by following these instructions.

Once you have the json file containing credentials on your computer, select your new device from the fleet, open a terminal (on main, not on the host OS) and do:

nano $GOOGLE_APPLICATION_CREDENTIALS

Then paste the contents of the credentials file. Press ctrl-x then y to save and exit. This file will persist over reboots of the device; you shouldn’t need to touch it again.

_images/opening-a-terminal.png

Open a terminal to the ‘main’ target and you can shell into the machine directly.
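
If you want to check the paste worked, this one-liner (assuming python3 is available in the container) verifies that the credentials file parses as JSON:

python3 -m json.tool "$GOOGLE_APPLICATION_CREDENTIALS" > /dev/null && echo "Credentials file is valid JSON"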

Add routine and configuration files

We’ll start with an empty routine file, which you can change later (see Routine files), and a basic configuration file.

mkdir -p /data/routines && echo "[]" > /data/routines/my-routine.json
mkdir -p /data/configurations && nano /data/configurations/my-configuration.json

Then paste in the following JSON (a basic configuration), update the values and save it:

{
  "gateway": {
    "installation_reference": "my-installation-reference", // change this to a meaningful value, eg "aventa-initial-deployment"
    "latitude": 0,
    "longitude": 0,
    "receiver_firmware_version": "unknown"
  },
  "nodes": {
    "1": {},
    "2": {},
    "3": {},
    "4": {},
    "5": {}  // Remove nodes if you know you don't need them
  },
  "measurement_campaign": {
    "label": "test-campaign",
    "description": "This field can be used to label and describe different measurement campaigns"
  }
}
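
Note that the // annotations above are explanatory only; strict JSON parsers (including Python’s json module) will reject them, so remove them before saving. You can check that the file parses with:

python3 -m json.tool /data/configurations/my-configuration.json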

Warning

You can store as many different configurations and routines as you want, but you should always save them somewhere in the /data/ folder, because it persists over restarts.

Set device variables

When you run the gateway, you’ll be able to specify a routine file and a configuration file. However, that makes for a lot of typing, especially when you’re trying to debug things.

To ease frustration, the best thing to do is to set these values as environment variables. Go to the “Device Variables” tab and add two variables, GATEWAY_ROUTINE_FILE and GATEWAY_CONFIG_FILE, whose values match the paths you set for the two files above.

_images/device-variables.png

Set device variables to change the default GATEWAY_ROUTINE_FILE and GATEWAY_CONFIG_FILE.

Check the installation

In the balena (or your own, for a manual installation) terminal, check by typing:

gateway --help
_images/gateway-help.png

If the gateway is correctly installed, you should see this.

Tip

You can always use the $GATEWAY_CONFIG_FILE or $GATEWAY_ROUTINE_FILE variables to see the paths and their contents:

echo The config file is at $GATEWAY_CONFIG_FILE and it contains...
cat $GATEWAY_CONFIG_FILE

Registration

When installing the physical Aerosense system onto one or more turbines (typically the turbines in a particular wind farm), you need to register the installation in the aerosense database.

For experimental and test purposes, there will generally be only one turbine (and therefore one gateway) per deployment.

Either way, once you’ve installed and configured the gateway, you need to register the installation.

Tip

Make sure you’ve chosen a sensible value for installation_reference in your configuration file. It must be unique to your installation (registration will fail with an error if it’s been used before), and it’s what you’ll use later to filter results in the dashboard and aerosense-tools.

To register, do:

gateway create-installation --config-file $GATEWAY_CONFIG_FILE

And follow the instructions. After this, the gateway should be ready for use.

Check

Once the above steps are complete, in the balena (or your own, for a manual installation) terminal, check the installation by typing:

gateway --help
_images/gateway-help.png

If the gateway is correctly installed, you should see this.

Usage

A gateway Command Line Interface (CLI) is provided to streamline gateway management and use. We recommend you get started with the gateway by looking at the following sections in order. Be sure to check out the tips!

Gateway CLI

The gateway has a CLI which means you can call it just like any other unix command.

It is called simply gateway. Once the code is deployed/installed, you can see the options and help by typing:

gateway --help

Or see more detailed help on a subcommand (eg start) with:

gateway start --help

Start

The start subcommand is overwhelmingly the most commonly used.

Once started, data is read continuously from the serial port, parsed, processed, batched into time windows, and either:

  • uploaded to an ingress Google Cloud storage bucket, where it is cleaned and forwarded to another bucket for storage, or

  • saved locally as JSON files, or

  • both.

The start command also allows you to send commands to the base station (which will broadcast them to the nodes). The sequence of commands you send is called a “routine”; commands can be sent automatically (for long-term acquisition) or interactively (for debug/test).

Automatic mode

Running the gateway in automatic mode doesn’t allow further commands to be passed to the serial port. Instead, a routine file must be specified, and the commands in it are issued automatically on your behalf, looping indefinitely.

Assuming you have your configuration and routine files set up per the instructions here, to start this mode, type:

gateway start

You can stop the gateway by pressing Ctrl + C.

Interactive mode

Running the gateway in interactive mode allows commands to be sent to the serial port while the gateway is running. A routine file can’t be provided if using this mode. Any commands entered interactively are logged to a commands.txt file in the output directory.

To start this mode, type:

gateway start --interactive

Typing stop or pressing Ctrl + C will stop the session.

Other options
  • The window size (default 600 seconds) can be set by using --window-size=<number_of_seconds> after the start command

  • You can store data locally instead of or at the same time as it is sent to the cloud by using the --save-locally option

  • To avoid sending any data to the cloud, provide the --no-upload-to-cloud option
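
These options should be combinable. For example, a short local-only debugging session (window size chosen purely for illustration) might look like:

gateway start --interactive --save-locally --no-upload-to-cloud --window-size=30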

Configuration Files

Configuration options for the gateway are supplied via a configuration file, which was set up in Configuration.

Once the gateway start command is invoked, the configuration is saved with the output data (if saving data locally). The configuration is also added to the metadata on the output files uploaded to the cloud, where it is used by the cloud ingress to populate the database.

Specifying other configuration files

The easiest way of specifying the file is to set the GATEWAY_CONFIG_FILE environment variable.

There are other ways, though. If the environment variable is not set, data-gateway looks for a file named config.json in the working directory; alternatively, the file path can be overridden in the CLI options (also see gateway start --help):

gateway start --config-file=</path/to/config.json>
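
On balena, set GATEWAY_CONFIG_FILE as a device variable (see Set device variables). For a manual installation you’d export it in your shell, for example (using the paths created earlier):

export GATEWAY_CONFIG_FILE=/data/configurations/my-configuration.json
export GATEWAY_ROUTINE_FILE=/data/routines/my-routine.json
gateway start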
Useful customisations

The most useful customisation is to add a measurement_campaign_reference field:

{
  "gateway": {
    // ...
  },
  "nodes": {
    // ...
  },
  "measurement_campaign": {
    // If you leave out this reference, a new one is created every
    // time you start the gateway. That allows you to then filter
    // down results in the dashboard to the exact run you're doing
    // right now.
    // But, if you're doing a series of related runs and want all
    // the results to be merged, set the reference value here for
    // continuity across runs. Don't forget to change
    // or remove this if you reuse the file for something else, though.
    "measurement_campaign_reference": "my-measurement-campaign",
    // You can enable further sorting of data by adding
    // campaign-specific labels
    "label": "run-1",
    // And add notes as aide-memoires.
    "description": "It's windy right now and the battery charged up overnight so we're taking the opportunity to run with mic and diff baros turned on."
  }
}

Further customisation

Any of the options in the data-gateway configuration module can be customised by updating entries in the configuration file.

Warning

If you move off the beaten track, especially when customising things like handles and packet keys, you really need to know what you’re doing!

Here is an example of a more extensive configuration file.

{
  "gateway": {
    "baudrate": 2300000,
    "endian": "little",
    "installation_reference": "my_installation_reference",
    "latitude": 0,
    "longitude": 0,
    "packet_key": 254,
    "packet_key_offset": 245,
    "receiver_firmware_version": "1.2.3",
    "serial_buffer_rx_size": 100000,
    "serial_buffer_tx_size": 1280,
    "turbine_id": "unknown"
  },
  "nodes": {
    "0": {
      "acc_freq": 100,
      "acc_range": 16,
      "analog_freq": 16384,
      "baros_bm": 1023,
      "baros_freq": 100,
      "blade_id": "0",
      "constat_period": 45,
      "battery_info_period": 3600,
      "decline_reason": {
        "0": "Bad block detection ongoing",
        "1": "Task already registered, cannot register again",
        "2": "Task is not registered, cannot de-register",
        "3": "Connection Parameter update unfinished"
      },
      "diff_baros_freq": 1000,
      "initial_node_handles": {
        "34": "Abs. baros",
        "36": "Diff. baros",
        "38": "Mic 0",
        "40": "Mic 1",
        "42": "IMU Accel",
        "44": "IMU Gyro",
        "46": "IMU Magnetometer",
        "48": "Analog1",
        "50": "Analog2",
        "52": "Constat",
        "54": "Cmd Decline",
        "56": "Sleep State",
        "58": "Info Message"
      },
      "gyro_freq": 100,
      "gyro_range": 2000,
      "remote_info_type": {
        "0": "Battery info"
      },
      "mag_freq": 12.5,
      "mics_freq": 15625,
      "mics_bm": 1023,
      "max_timestamp_slack": 0.005,
      "max_period_drift": 0.02,
      "node_firmware_version": "unknown",
      "number_of_sensors": {
        "Mics": 10,
        "Baros_P": 40,
        "Baros_T": 40,
        "Diff_Baros": 5,
        "Acc": 3,
        "Gyro": 3,
        "Mag": 3,
        "Analog Vbat": 1,
        "Constat": 4,
        "battery_info": 3
      },
      "periods": {
        "Mics": 6.4e-5,
        "Baros_P": 0.01,
        "Baros_T": 0.01,
        "Diff_Baros": 0.001,
        "Acc": 0.01,
        "Gyro": 0.01,
        "Mag": 0.08,
        "Analog Vbat": 6.103515625e-5,
        "Constat": 0.045,
        "battery_info": 3600
      },
      "samples_per_packet": {
        "Mics": 8,
        "Diff_Baros": 24,
        "Baros_P": 1,
        "Baros_T": 1,
        "Acc": 40,
        "Gyro": 40,
        "Mag": 40,
        "Analog Vbat": 60,
        "Constat": 24,
        "battery_info": 1
      },
      "sensor_conversion_constants": {
        "Mics": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
        "Diff_Baros": [1, 1, 1, 1, 1],
        "Acc": [1, 1, 1],
        "Gyro": [1, 1, 1],
        "Mag": [1, 1, 1],
        "Analog Vbat": [1],
        "Constat": [1, 1, 1, 1],
        "battery_info": [1e6, 100, 256]
      },
      "sensor_coordinates": {
        "Mics": "mics_coordinate_reference",
        "Baros_P": "baros_coordinate_reference",
        "Baros_T": "baros_coordinate_reference",
        "Diff_Baros": "baros_coordinate_reference",
        "Acc": "accelerometers_coordinate_reference",
        "Gyro": "gyroscopes_coordinate_reference",
        "Mag": "magnetometers_coordinate_reference"
      },
      "sensor_names": [
        "Mics",
        "Baros_P",
        "Baros_T",
        "Diff_Baros",
        "Acc",
        "Gyro",
        "Mag",
        "Analog Vbat",
        "Constat",
        "battery_info"
      ],
      "sleep_state": {
        "0": "Exiting sleep",
        "1": "Entering sleep"
      }
    }
  },
  "measurement_campaign": {
    "label": "my-test-1",
    "description": null
  }
}

Routine files

Commands can be sent to the sensors in two ways:

  • Typing them in manually in interactive mode

  • Creating a routine JSON file and providing it to the CLI

Creating and providing a routine file

A routine file looks like this:

{
    "commands": [["startIMU", 0.1], ["startBaros", 0.2], ["getBattery", 3]],
    "period": 5  # (period is optional)
}

and can be provided to the CLI’s start command by using:

--routine-file=<path/to/routine_file.json>

If this option isn’t provided, the CLI looks for a file called routine.json in the current working directory. If this file doesn’t exist and the --routine-file option isn’t provided, the command assumes there is no routine file to run.

Warning

If a routine file is provided in interactive mode, the routine is ignored. Only the commands entered interactively are sent to the sensors.

Routine file schema
  • The commands key in the file should be a list of two-element lists. Each two-element list should comprise a valid string command to send to the sensors and a delay in seconds (measured from the gateway starting) after which the command is run.

  • An optional period in seconds can be provided to repeat the routine. If none is provided, the routine is run once only. The period must be greater than each of the commands’ delays.
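
As a minimal sketch of these rules (this is not part of the gateway CLI; the file name is just an example), a routine file can be sanity-checked with a few lines of Python:

import json

def check_routine(path):
    """Check a routine file against the schema described above."""
    with open(path) as f:
        routine = json.load(f)

    # Each entry must be [command_string, delay_in_seconds]
    for command, delay in routine["commands"]:
        assert isinstance(command, str), "commands must be strings"
        assert delay >= 0, "delays must be non-negative numbers of seconds"

    # "period" is optional; without it the routine runs once only
    period = routine.get("period")
    if period is not None:
        max_delay = max(delay for _, delay in routine["commands"])
        assert period > max_delay, "period must exceed every command delay"

check_routine("routine.json")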

Example routine files
{
    "commands": [
        [
        "startDiffBaros",
        60
        ],
        [
        "startIMU",
        65
        ],
        [
        "getBattery",
        70
        ],
        [
        "stopDiffBaros",
        660
         ],
        [
        "startBaros",
        670
        ],
        [
        "startMics",
        1265
        ],
        [
        "stopBaros",
        1270
        ],
        [
        "stopIMU",
        1275
        ],
        [
        "stopMics",
         1280
        ]
    ],
    "period": 3600
}
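
Reading off the delays, this example runs the differential baros for ten minutes (60 s to 660 s), the IMU for roughly twenty minutes (65 s to 1275 s) and the absolute baros for ten minutes (670 s to 1270 s), with a battery check at 70 s and a short mics burst from 1265 s to 1280 s; the whole sequence repeats every hour (period of 3600 seconds).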

Tips

Editing configuration and routine files

Often, whilst testing or getting set up, you’ll want to edit files on a device itself, particularly a configuration or routine file that you’re just trying out.

For this, the nano editor has been installed in the built containers so you can do:

$ nano $GATEWAY_ROUTINE_FILE

or

$ nano $GATEWAY_CONFIG_FILE

to edit either the routine or the configuration files.

SCP of files to/from BalenaOS

99% of the time, using nano to edit (or paste from your own preferred IDE) will be fine. Occasionally, though, you’ll want to get files on/off a device. In particular, for debugging, you might run the gateway with the -l and/or --save-local-logs options, then want to directly inspect the data files that result.

However, the balena cli doesn’t support scp that well out of the box (although there are workarounds using tunneling).

To copy files between the /data directory (on a container deployed by Balena) and your own machine:

  1. Install the balena CLI and do balena login

  2. Add your public ssh key to BalenaCloud and make sure you can use balena ssh correctly, following these instructions

  3. Check this GitHub issue. If closed with a new balena CLI feature, then follow those instructions instead. Otherwise use the following workaround.

  4. Install the ssh-uuid utility.

  5. Get the full UUID of the device:

$ balena devices
ID      UUID    DEVICE NAME   DEVICE TYPE     FLEET              STATUS IS ONLINE SUPERVISOR VERSION OS VERSION       DASHBOARD URL
7294376 4bfe19d fried-firefly raspberrypi4-64 aerosense/gateways Idle   true      13.1.11            balenaOS 2.98.33 https://dashboard.balena-cloud.com/devices/4bfe19d3651d27dc89d4b1a8c95061fa/summary

$ balena device 4bfe19d
== FRIED FIREFLY
ID:                    7294376
...
UUID:                  4bfe19d3651d27dc89d4b1a8c95061fa
...

  6. Get the App ID (which is actually the Fleet ID), in this case 1945598:

balena fleets
ID      NAME                SLUG                          DEVICE TYPE         ONLINE DEVICES DEVICE COUNT
1945598 gateways            aerosense/gateways            raspberrypi4-64     1              3

  7. We’ll be copying from balena’s Host OS, not from the container. The /data directory isn’t mounted in the same place as when you’re inside the container, so the root of the data folder is:

/var/lib/docker/volumes/<APP ID>_resin-data/_data/

  8. So, for example, to copy a file from within the /data folder from remote to local, do:

scp-uuid 4bfe19d3651d27dc89d4b1a8c95061fa.balena:/var/lib/docker/volumes/1945598_resin-data/_data/gateway/20221122T100229/window-2.json .

  9. The scp command should work recursively with folders, but take care: the folders can be large if a long session has taken place.

Daemonising

Warning

Daemonisation cannot happen reliably until issue 119 (https://github.com/aerosense-ai/data-gateway/issues/119) is solved.

During the aerosense project, Balena has made it so convenient to shell in and manage sessions that it’s the only thing we’ve actually done.

However, this sometimes means babysitting the gateway, which takes up time. That’s fine in the very early days, but if you’re setting up a longer-term deployment (eg of an aerosense test rig) you should daemonise the gateway start command.

This basically means setting the system up to:

  • start the gateway along with the rest of the OS on boot

  • restart the gateway program if it crashes

There are lots of ways of doing this but we strongly recommend using supervisord, which, as the name suggests, is a supervisor for daemonised processes.

Install supervisord on your system:

# Update package lists, then install supervisor (the Debian/Ubuntu package providing supervisord)
sudo apt-get update
sudo apt-get install supervisor

Configure supervisord (more info here) to run the gateway as a daemonised service:

Warning

We’ve not actually done this (see the warning above), but it should look very similar to this:

sudo gateway supervisord-conf >> /etc/supervisord.conf
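
The resulting configuration should contain a program block along these lines (again, untested, so treat it as an illustrative sketch rather than the exact output):

; Hypothetical program block for supervisord -- illustrative only
[program:AerosenseGateway]
command=gateway start
autostart=true
autorestart=true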

Restarting your system at this point should start the gateway process at boot time.

You can use supervisorctl to check gateway status:

supervisorctl status AerosenseGateway

Similarly, you can stop and start the daemon with:

supervisorctl stop AerosenseGateway
supervisorctl start AerosenseGateway

Cloud

Most aspects of the aerosense cloud are documented in other repositories (eg aerosense-tools, aerosense-dashboard). However, it’s worth collecting some notes here on the gateway / ingress side.

Cloud function

We’ve written a Google Cloud Function (a serverless deployed app) that, when a window is uploaded to the storage ingress bucket, pre-processes/cleans it before moving it to a more permanent home in a different bucket. The ingress bucket is currently set to aerosense-ingress-eu and the output bucket is set to data-gateway-processed-data. Both are part of the aerosense-twined Google Cloud project. You can view the deployed Cloud Function here - it’s called ingress-eu.

There is no need to read further about this if you are only working on data collection from the serial port.

Developing the cloud function

The entrypoint for the cloud function is cloud_functions.main.clean_and_upload_window and it must accept event and context arguments in that order. Apart from that, it can do anything upon receiving an event (the event is an upload of a file to the ingress bucket). It currently uses the window_handler module.
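
For orientation, a background Cloud Function triggered by google.storage.object.finalize receives an event of this shape. This is a minimal sketch; the real entrypoint lives in cloud_functions/main.py and does the actual cleaning.

# Minimal sketch of a storage-triggered background Cloud Function.
def clean_and_upload_window(event, context):
    """Triggered by a file upload to the ingress bucket."""
    bucket_name = event["bucket"]  # standard fields on a GCS finalize event
    file_name = event["name"]
    print(f"Received gs://{bucket_name}/{file_name}")
    # ...clean the window and move it to the output bucket...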

Dependencies

Dependencies for the cloud function must be included in the requirements.txt file in the cloud_functions package.

More information

More information can be found at https://cloud.google.com/functions/docs/writing

Manual redeployment

The cloud function package is included in this (data-gateway) repository in cloud_functions, which is where it should be edited and version controlled. When a new version is ready, it must be manually deployed to the cloud for it to be used for new window uploads (there is no automatic deployment enabled currently):

cd cloud_functions

gcloud functions deploy ingress-eu \
    --runtime python38 \
    --trigger-resource <name_of_ingress_bucket> \
    --trigger-event google.storage.object.finalize \
    --memory 1GB \
    --region <name_of_region> \
    --set-env-vars SOURCE_PROJECT_NAME=<source_project_name>,DESTINATION_PROJECT_NAME=<destination_project_name>,DESTINATION_BUCKET_NAME=<destination_bucket_name>

Creating Service Accounts

The gateway that you install on a turbine needs to upload data to Aerosense Cloud. However, we don’t want “just anybody” to be able to write data to the store - that leaves us vulnerable to a wide range of attacks. So the gateway must authenticate itself with the store prior to upload.

To enable the gateway to authenticate itself, we use a service account, which is a bit like a user account (it has an email address and can be given certain permissions) but for a non-human.

Here, we will create a service account for a deployment. This results in a single credentials file that we can reuse across the gateways (turbines) in the deployment, saving the administrative overhead of maintaining separate credentials for each.

Choose your service account name

When creating service accounts, whilst any name will work, sticking to a naming convention is very helpful. There are three kinds of service account names:

For deployment with Balena

If you’ve deployed gateway code using Balena, the newly added device will be given a “coolname” like fried-firefly or holy-lake. Use this name as your service account name, which makes it super easy to diagnose any problems, or restrict permissions if a device is lost.

For manual deployment

Use a name that’s prefixed with as-deployment-, eg as-deployment-tommasso. These service accounts should be set up with the same permissions as a balena deployment; the prefix simply indicates that it’s for manual trialling of uploads from the gateway.

For development

Developers will work across the entire stack of cloud functions, gateway and other aspects of the project like dashboard or tools. Thus developers’ service accounts are expected to have a wide and varied range of permissions. The name should be in the form developer-<github-handle>, eg developer-thclark.

Create account and credentials

Log in to the aerosense-twined project on Google Cloud Platform (GCP) and work through the following steps:

1. Go to IAM > Service Accounts > Create

_images/1-go-to-iam-service-accounts.png

Go to the service accounts view, and click “Create Service Account”

2. Create the service account

_images/2-create-service-account.png

The service account name should be per the naming guidelines above. In this image, as-deployment-gatewaytest was used.

3. Skip assignation of optional roles and users (for now)

_images/3-no-grants-or-users.png

Do not assign roles or users for now. We’ll assign the permissions for the specific resource(s) in step 6.

4. Create and download a private JSON key for this Service Account

_images/4a-create-key.png

Find your newly created service account in the list (you may have to search) and click ‘Create Key’.

_images/4b-key-should-be-json.png

Choose the default JSON key type.

_images/4c-key-will-be-saved.png

Google will create a key file and it will be downloaded to your desktop.

5. Locate the ingress bucket in the storage browser, and click on “Add Member”

_images/5-locate-aerosense-ingress-bucket.png

From the left hand navigation menu, change to the Storage Browser view and locate the aerosense-ingress-eu bucket. Select it, and click “Add Member” in the right hand control pane.

6. Assign the Storage Object Creator permission

_images/6-add-storage-object-creator.png

We wish to add the service account created above to this bucket’s permissions member list. Use the email address that was generated in step 2 to find your new service account and add it. We want the service account to have minimal permissions which in this case means assigning the role of Storage Object Creator.

And you’re done! Keep that downloaded permission file for later.

Attention

Do not add this private credentials file to a docker image, email, skype/meet/zoom, dropbox, whatsapp, git commit, post in an issue, or anywhere else.

Doing so will earn you the penance of flushing and rotating all the system credentials.

Uploads

Once gateway start is running, windows are uploaded to the cloud ingress bucket (unless the --no-upload-to-cloud option is used).

If the connection to Google Cloud fails, windows will be written to the hidden directory ./<output_directory>/.backup where they will stay until the connection resumes. Backup files are deleted upon successful cloud upload.
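
The behaviour is conceptually along these lines (an illustrative sketch only, not data-gateway’s actual code; the real implementation presumably catches specific Google Cloud errors):

# Illustrative sketch of the backup behaviour -- NOT the actual implementation.
import os
import shutil

def upload_or_backup(window_path, output_directory, upload):
    """Try to upload a window; stash it in .backup if the connection fails."""
    backup_directory = os.path.join(output_directory, ".backup")
    try:
        upload(window_path)
    except Exception:  # the real code likely catches narrower GCP errors
        os.makedirs(backup_directory, exist_ok=True)
        shutil.move(window_path, backup_directory)  # retried on reconnection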

Hardware and firmware versions

Unfortunately, we can’t carry out automatic dependency (version) resolution of the hardware or firmware that data-gateway runs on as the packages are not controlled by pip (the python dependency manager). We can, however, manually specify which hardware/firmware versions are compatible with this package. So far, the following versions of this package have been written to work with the respectively listed versions of firmware/hardware:

  • 0.0.5

  • 0.0.4

As this is only the first version of data-gateway, we decided to defer a full mapping of hardware/firmware versions to software versions until support for a new hardware/firmware version is needed, rather than providing it right now. As new versions of the firmware/hardware are produced, their authors will need to supply us with test fixture data so we can make sure data-gateway is compatible.

More information and discussion on this topic can be found here.

Attention

This library is in experimental stages! Please pin deployments to a specific release, and consider every release as breaking.

Version History

Version history can be found here on GitHub.