Porting Libiota to Other Platforms

Introduction & Purpose

The goal of this document is to give an overview of the separation between platform-dependent and platform-independent code in libiota, and to enable developers to spin up their own implementations for platforms that are not yet supported on the official branch. As part of this process, we will go over some general requirements libiota imposes on the platform, details of the provider layer, and things to consider before starting.

We assume that you have already worked with libiota on a supported platform and have a general idea of how to handle the application side of things (e.g. creating devices/interfaces, adding config handlers).

There are tentative plans to allow contributions to libiota for other platforms. If you spin up your own implementation, consider open-sourcing it through the official repository to encourage vendors to develop with your platform! You can talk to your Google representative for details if it comes to fruition, or reach out to the community.

Overview of Provider Layer

In the libiota repository, code that lives under the src/ directory is platform-independent, while platform-specific code resides under the platform/ directory. The general logic of libiota is platform-independent; it is composed of state machines that construct the necessary messages to the server, determine the appropriate time to deliver those messages, and process incoming messages from the various endpoints we connect to. The platform-specific elements are:

  • Retrieving time from the device (i.e. wall clock time, system ticks)
  • Persisting information on the device (i.e. writing and reading from persistent storage)
  • Making network requests (i.e. sending and receiving HTTPS messages)

Each of these elements (time, storage, and the HTTP client) is what libiota refers to as a provider. Providers expose generic APIs for the platform-independent code to call, which in turn issue the calls to the platform SDK.

A simple example would be time acquisition:

// Time provider function signature:
typedef time_t (*IotaTimeGet)(IotaTimeProvider* provider);

// On Linux, we would pass a pointer to this function:
time_t posix_iota_time_get_(IotaTimeProvider* provider) {
  return time(NULL);
}

// On Marvell, we would pass a pointer to this function:
time_t mw_iota_time_get_(IotaTimeProvider* provider) {
  return wmtime_time_get_posix();
}

Note that both function signatures match the IotaTimeGet typedef. This function pointer is passed along to the platform-independent code, which uses it to invoke the platform API whenever it needs the time. To spin up a new platform, the set of functions each provider defines must be implemented using that platform's APIs.
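
For illustration, assembling a provider typically means filling a struct of such function pointers and handing it to libiota. This is a sketch only; the field name below is an assumption, so check include/iota/providers/ for the actual layout:

// Sketch only: the IotaTimeProvider field name is an assumption.
IotaTimeProvider time_provider = {0};
time_provider.get = &posix_iota_time_get_;  // assumed field for IotaTimeGet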

The provider definitions can be found in the include/iota/providers/ directory. Once you have a sense of what functions are expected to be implemented, you can move on to the actual process of porting.

Considerations of Platform Requirements

Keep in mind that while the provider layer captures the bare minimum a platform must offer for libiota to run, there are implicit assumptions being made about the system. Take care to determine whether the new platform you are porting to meets these assumptions. This is especially true when targeting systems with lower specifications than those officially supported in the repository.

A summary of the implicit requirements is provided at the end of this section for easy reference.

Time Provider Considerations

The time provider defines IotaTimeGet, IotaTimeGetTicks, and IotaTimeGetTicksMs. The latter two are usually available on all platforms; they provide a monotonic counter used to schedule events. The first function expects a return value of seconds since the Unix epoch. The implicit assumption being made is that there is either

  1. an NTP solution on the board, or
  2. a method by which a developer could integrate a separate NTP solution on the board.

Libiota does not perform NTP on its own and expects that IotaTimeGet can return the epoch seconds correctly at any point. Ideally, the time should be acquired before the libiota daemon starts. Time information is used by libiota to schedule events, so the platform must also support 64-bit arithmetic on the return values of these methods.
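
A minimal sketch of the ticks function for a POSIX platform is shown below. The exact IotaTimeGetTicksMs signature lives in the provider headers, so the uint64_t return type here is an assumption; CLOCK_MONOTONIC is used because it never jumps backwards, which is what event scheduling needs.

#include <stdint.h>
#include <time.h>

// Assumed signature; check include/iota/providers/ for the real typedef.
uint64_t posix_iota_time_get_ticks_ms_(IotaTimeProvider* provider) {
  struct timespec ts;
  clock_gettime(CLOCK_MONOTONIC, &ts);  // monotonic within a boot cycle
  return (uint64_t)ts.tv_sec * 1000u + (uint64_t)ts.tv_nsec / 1000000u;
}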

Http Client Considerations

The http provider defines SendRequest, SetConnected, FlushRequests, and Destroy methods. The details of their behavior are outlined in the provider headers. Libiota imposes a number of expectations on the HTTP stack.

Asynchronous and Concurrent Connections

Platforms must be able to send HTTPS requests asynchronously and concurrently. Libiota maintains a persistent connection with Google Cloud Messaging (GCM) to receive push notifications for commands. As a consequence, it cannot wait for a request to complete (asynchrony) and must be able to send other requests in the meantime (concurrency). Platforms that lack this support would have to poll the server for updates, making them impractical for production.

Request Cancelling and Connectivity Event Subscription

Platforms must provide a means to cancel ongoing requests and subscribe to changes in network connectivity. Libiota does not manage connectivity and should be notified when it is gained or lost. In turn, it may cancel requests it has already issued. For instance, the Linux platform achieves this through cURL which provides request handles that can be used to clean up request resources at any point.

TLS/SSL Requirements

If your platform comes with a standard TLS stack (like OpenSSL), then you probably have most of these requirements covered. Certificate validation for host name and time validity must be available, and must be able to verify host names of the RFC standard length. The TLS handshake should be able to send a Server Name Indication (SNI). The TLS 1.2 standard and (at least a subset of) the following cipher suites should be available:

  • ECDHE-ECDSA-AES128-GCM-SHA256
  • ECDHE-RSA-AES128-GCM-SHA256
  • ECDHE-ECDSA-CHACHA20-POLY1305
  • ECDHE-RSA-CHACHA20-POLY1305
  • ECDHE-ECDSA-AES128-SHA256
  • ECDHE-RSA-AES128-SHA256

Libiota uses a trust root based upon the certificates provided at pki.google.com. The PEM file that comes from this site is ~50K. As multiple requests may be issued at once, platforms must be able to load these certs and ideally share them across connections. Duplicating these certs for every connection will likely cause memory fragmentation. Certificates are only provided in the PEM format, so if your platform uses a proprietary certificate format, ensure that it can convert from PEM.
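
As an illustration, with OpenSSL (1.1.0 or later assumed) the PEM trust root can be parsed once into a shared SSL_CTX and reused by every connection, which also covers the SNI and validation requirements above; the hostname and file path below are illustrative:

#include <openssl/ssl.h>

// Created once at startup; all connections share the parsed roots.
SSL_CTX* create_shared_ctx(void) {
  SSL_CTX* ctx = SSL_CTX_new(TLS_client_method());
  SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION);
  SSL_CTX_load_verify_locations(ctx, "/path/to/roots.pem", NULL);
  SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
  return ctx;
}

// Per connection: send SNI and enable hostname validation.
SSL* create_connection(SSL_CTX* ctx, const char* host) {
  SSL* ssl = SSL_new(ctx);
  SSL_set_tlsext_host_name(ssl, host);  // SNI
  SSL_set1_host(ssl, host);             // hostname check during verification
  return ssl;
}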

Connection Setup Time

The total time to set up a connection, send a request, receive a response, and tear down should average roughly 500 ms for an empty request/response.

Request Header and Body Parameters

Support for all HTTP request methods is expected (including PATCH). There is no official minimum on header size yet, but in practice we don't expect to send an individual header larger than 500 bytes. The platform must support body lengths of at least 8192 bytes.

Connection Count

In an idle state, libiota will maintain one persistent connection with the GCM notification channel. As commands come in, we anticipate an additional two to three requests being sent out concurrently. We require at minimum two connections to function, but need four dedicated connections for optimal use. While libiota provides a means to constrain its connection usage, a reduced connection count has user experience implications (e.g. state updates to the server may be delayed, floods of commands might take longer to resolve).

Keep in mind that four connections may not be enough for a real application; developers who use libiota on your platform may have other integrations that require connections.

Summary of Considerations

Below, the preceding paragraphs are condensed into a list of considerations for each provider.

  • Platforms must ensure that they can satisfy:
    • Requirements relating to the time provider:
      • Access a monotonic (within a boot cycle) clock on the system.
      • Perform 64-bit integer arithmetic.
      • Synchronize time with an NTP service.
    • Requirements relating to the http client provider:
      • Send HTTPS requests asynchronously and at least two concurrently.
      • Cancel ongoing requests.
      • Subscribe to network connectivity changes.
      • Load root certificates of size ~50K, without duplication.
      • Validate hostnames up to the RFC standard length.
      • Validate certificate time.
      • Send SNI during the SSL handshake.
      • Support the cipher suites mentioned in the above section.
      • Have a setup, send, receive, teardown connection time of ~500 ms on average.
      • Handle requests that have headers up to 500 bytes, and body lengths of at least 8192 bytes.
      • Dedicate at least two connections to libiota and ideally four connections.

If your platform meets these requirements it is likely a good candidate for libiota.

Porting Procedure

Porting boils down to the following four tasks:

  1. Setting up allocation macros and logging.
  2. Implementing the three providers for the platform.
  3. Writing a daemon implementation for your platform.
  4. Creating an end-to-end example for your platform.

We will briefly discuss what needs to be done in each of these steps. Take a moment to read through them all first, since it might suit you to take on the tasks in a different order.

Setting Up Allocation Macros & Logging

Before implementing the providers, you'll need to modify the IOTA_PLATFORM_ALLOC macro. This macro is used by the platform-independent code when it needs to allocate memory, and must be mapped to your platform's allocator. Refer to include/iota/alloc.h, and add your platform:

#ifdef __FREERTOS
#include <wm_os.h>
#define IOTA_PLATFORM_ALLOC(size) os_mem_alloc(size)
#define IOTA_PLATFORM_FREE(ptr) os_mem_free(ptr)
#define IOTA_PLATFORM_HEAP_FREE() os_get_free_size()

#elif ...
// Other platform alloc macros.

#elif defined(__YOUR_PLATFORM__)
#define IOTA_PLATFORM_ALLOC(size) your_platform_alloc(size)
#define IOTA_PLATFORM_FREE(ptr) your_platform_free(ptr)
// Map to your platform's heap statistics call, or define as 0 if unsupported.
#define IOTA_PLATFORM_HEAP_FREE() your_platform_heap_stats()

#endif

If you do not define these first, libiota won't be able to allocate memory, so make sure to do this before implementing anything else. Of course, ensure that __YOUR_PLATFORM__ is defined when you compile for your platform.
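
For illustration, the platform-independent code allocates through these macros roughly as follows, so a working mapping is all that is needed:

// Illustrative use from platform-independent code:
uint8_t* buffer = (uint8_t*)IOTA_PLATFORM_ALLOC(128);
if (buffer != NULL) {
  // ... use the buffer ...
  IOTA_PLATFORM_FREE(buffer);
}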

To enable logging, there are two paths. By default, libiota assumes the existence of the standard IO vprintf function; if your platform implements it, logging should work out of the box. If it does not, you can either (1) implement vprintf yourself (take a look at the platform/qc4010 directory), or (2) define an IotaLog function and call set_log_function when starting your application. Libiota prepends a header containing a timestamp to every log message. To fetch the timestamp, the logging module needs a handle to the time provider, which the application can provide using iota_set_log_time_provider(<time-provider>). If this handle is not set, the timestamp will be displayed as 0.
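
If you take the second path, the wiring might look like the sketch below. The IotaLog signature assumed here (printf-style format plus va_list) and my_platform_vprintf are assumptions; check the logging header for the real types:

#include <stdarg.h>

// Assumed IotaLog shape; verify against the libiota logging header.
static void my_platform_log_(const char* format, va_list args) {
  my_platform_vprintf(format, args);  // hypothetical platform output call
}

// At application startup (time_provider as created for your platform):
void my_platform_logging_init(IotaTimeProvider* time_provider) {
  set_log_function(&my_platform_log_);        // route libiota logs here
  iota_set_log_time_provider(time_provider);  // real timestamps in the header
}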

Implementing the Providers

The storage and time providers are the best place to start, as they are usually the most trivial to implement. Note that every function defined in the providers takes a pointer to that provider as its first argument. The provider you pass into the platform-independent code will be forwarded along to these functions. This is really useful when you need context:

typedef struct {
  IotaStorageProvider provider;
  psm_hnd_t  psm_handle_;
} MwStorageProvider;

IotaStatus mw_iota_storage_clear_(IotaStorageProvider* provider) {
  psm_hnd_t psm_handle = ((MwStorageProvider*)provider)->psm_handle_;
  psm_object_delete(psm_handle);
  // ... code ...
}
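
Constructing such a provider is then a matter of filling in both the generic and the platform-specific parts. This sketch assumes a clear function-pointer field on IotaStorageProvider and a hypothetical setup call; check the provider header for the real names:

// Sketch only: the field and setup names are assumptions.
MwStorageProvider* mw_storage_provider_create(void) {
  MwStorageProvider* storage =
      (MwStorageProvider*)IOTA_PLATFORM_ALLOC(sizeof(MwStorageProvider));
  storage->provider.clear = &mw_iota_storage_clear_;  // assumed field name
  storage->psm_handle_ = my_open_psm_partition();     // hypothetical setup call
  // Pass &storage->provider to libiota; callbacks cast it back as shown above.
  return storage;
}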

After implementing these functions, take on the HTTP provider. The current strategy is to have the SendRequest method establish the connections and have a separate ProcessResponses method to deal with response data. The section covering the creation of a daemon for your platform will complete the picture, but for now, assume that there will be a single thread responsible for calling SendRequest and later calling ProcessResponses. The key point is that it is a single thread; sending a request and processing responses will not interfere with each other.

The HTTP provider high-level task list is as follows:

  • When sending a request:
    • The response of a request will be processed in a different method, so the context of the request must be preserved within the provider; this necessarily includes the user_data, stream_callback and final_callback of the request (see the sketch after this list).
    • You should allocate an IotaHttpClientResponse for a request; the response can be as large as 8192 bytes.
    • If upon attempting to send a request there is a failure, the method should still return success, but flag the request context as having failed.
  • When reading a response:
    • If the request this response is associated with has a stream_callback:
      • Write the received data (do not wait for a complete message) to the IotaHttpClientResponse, and invoke the stream_callback.
      • Check the stream_callback's result and consume the bytes from the IotaHttpClientResponse.
    • If the response is complete, you should write any remaining data into the IotaHttpClientResponse and invoke the final callback, and then clean up the resources for the connection.
    • If at any point a failure is met (e.g. the socket closes unexpectedly, the stream callback returns a failure code), no further work should be done on the request, and the final callback should be invoked with the failure.
  • When flushing requests (clearing requests):
    • You should close all existing connections and not invoke any callbacks.
  • When setting the connection state:
    • If the connection state has been set to disconnected, you should stop the ongoing connections and invoke their final callbacks with the connectivity failure status.

While the above information should be in the provider headers, it is best to use one of the platform implementations as a reference (the host platform is recommended).

Testing the Provider

Once the providers for your platform are implemented, the provider test suite in the provider_tests/ directory can be used to validate them. Each of the providers has a platform-independent test module based on the Unity test framework. The provider_testrunner/ directory has platform-specific examples of a lightweight application that creates the providers and runs the provider test suite. Use these examples as a reference to build a sample application for your platform and run the provider tests. The tests are expected to run on the target platform, and all of them should pass.

The test runner's high-level tasks are as follows (a runner sketch follows this list):

  • Perform device- or platform-specific initialization (e.g. console setup, CLI setup).
  • Add a provisioning mechanism to connect to the Internet, since the httpc provider requires an active connection to perform its tests.
  • Create the providers for your platform.
  • Add a means to trigger the test suite, passing in the providers you created. (The supported platforms use a CLI module to trigger the test suite manually.)
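
A hedged runner sketch is shown below; every function named here is hypothetical, since the real entry points live under provider_tests/ and your platform SDK:

// Sketch only: all names below are hypothetical.
int main(void) {
  my_platform_console_init();    // device/platform initialization
  my_platform_wifi_provision();  // httpc tests need a live connection
  IotaProviders providers = my_platform_providers_create();
  run_provider_tests(&providers);  // trigger the suite with the providers
  return 0;
}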

The provider tests use IOTA_LOG to print their results, so make sure that logging is set up for the platform before running them.

Implementing the Daemon

You will find that each platform has its own daemon.h/c files. Libiota defines an IotaDaemon, which is simply a structure that holds accessors to the rest of the state machine. The bare-minimum setup of this structure can be seen in host_iota_daemon_create; the base variable is what's important. An IotaDaemon has four fields, all of which need to be set:

typedef struct {
  IotaDevice* device;       // Put IotaDevice* here.
  IotaSettings settings;    // Copy keys into settings.oauth2_keys.
  IotaWeaveCloud* cloud;    // Set to iota_weave_cloud_create(..).
  IotaProviders providers;  // Initialize with provider structures.
} IotaDaemon;

Consider the create method for your platform now. If you've already implemented the providers, creating the final structure is trivial: we allocate space for the individual provider pointers and copy the function pointers in. Refer to any of the provider implementations. Assume for now that the IotaDevice and the IotaOauth2Keys are provided as inputs to your create method; creating the daemon is just a matter of copying these pointers into the structure. The cloud can be created by passing in the other daemon variables as arguments. For the purposes of this document, we'll skip over how to create the device and keys; the example files and code lab cover them in better detail.
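
A simplified create sketch follows; MyPlatformDaemon and the helper calls are illustrative, so see host_iota_daemon_create for the canonical version:

// Sketch only: helper names are hypothetical.
typedef struct {
  IotaDaemon base;  // placed first so an IotaDaemon* can be cast back,
                    // mirroring the provider pattern shown earlier
  // ... platform-specific state (thread handle, job queue, etc.) ...
} MyPlatformDaemon;

MyPlatformDaemon* my_platform_daemon_create(IotaDevice* device,
                                            IotaOauth2Keys* keys) {
  MyPlatformDaemon* d =
      (MyPlatformDaemon*)IOTA_PLATFORM_ALLOC(sizeof(MyPlatformDaemon));
  d->base.device = device;
  // Copy keys into d->base.settings.oauth2_keys (see a reference platform).
  my_platform_providers_init(&d->base.providers);  // hypothetical helper
  d->base.cloud = iota_weave_cloud_create(/* other daemon fields */);
  return d;
}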

Once the method to create the daemon is in place, we run the daemon using the following loop; refer to host_iota_daemon_run_ in platform/host/daemon.c.

// Purposefully simplified for explanation:
while (true) {
  if (!host_daemon->is_connected) {
    continue;
  }

  if (!iota_weave_cloud_run_once(daemon->cloud)) {
    curl_iota_httpc_execute(daemon->providers.httpc, 1000 /* ms */);
  }
}

The iota_weave_cloud_run_once call drives all the logic of libiota: it runs the internal state machine and, using the various providers, decides its next step and issues requests if it needs to. The method returns false when it is waiting for the state to change. We then run a method on the HTTP provider to process responses, potentially invoking the callbacks and changing the state of the daemon. With just these two methods you reach the bare minimum needed to run libiota. Of course, for developers, an infinite while loop is hardly acceptable, especially when other tasks might need to be run. Therein lies all the extended complexity of the daemon.c file. All implementations of daemon.c commonly add the following functionality:

  • The platform_daemon_create first spawns a new thread, then creates the daemon and starts running it.
    • The creation of the IotaDevice should be done on this new thread.
    • The application passes a callback which is forwarded to the newly spawned thread; the thread invokes this callback to create the IotaDevice.
  • The daemon exposes a job queue that the application posts to in order to affect state.
    • In order to stop the daemon, the application might call platform_daemon_destroy, which in turn posts a destroy job to the daemon.
    • In order to register a device, the application might call platform_daemon_register, which posts a registration job with a ticket so that the daemon can invoke the register call on its cloud.
    • The daemon checks its job queue on every run of the loop to see if a new task was posted. The implementation of this queue differs for each platform; some platform SDKs provide thread-safe queues (a minimal queue sketch follows this list).
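
As an illustration, a minimal thread-safe queue shape using POSIX threads is sketched below; platforms whose SDKs provide an equivalent primitive can use that instead:

#include <pthread.h>

// Sketch only: a minimal job queue. A real one would keep FIFO order,
// signal the daemon thread, and bound its length.
typedef struct Job {
  void (*run)(IotaDaemon* daemon, void* arg);
  void* arg;
  struct Job* next;
} Job;

typedef struct {
  pthread_mutex_t lock;
  Job* head;
} JobQueue;

// Called from the application thread; the daemon drains the queue each loop.
void job_queue_post(JobQueue* q, Job* job) {
  pthread_mutex_lock(&q->lock);
  job->next = q->head;  // LIFO for brevity
  q->head = job;
  pthread_mutex_unlock(&q->lock);
}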

You will likely need to implement the above features in your platform daemon to support real applications, which typically need to perform other tasks of their own.

Creating an End-to-End Application

If the above two steps are complete, you've successfully ported libiota onto your platform! However, as mentioned in the Considerations section, there's still a little work to do before you can claim that your device is libiota-ready. The development frameworks provided in examples/<platform>/framework/dev_framework.c are good to refer to; the focus is on the primary needs of (1) WiFi acquisition, (2) a method to subscribe to or notice when connectivity is gained or lost, and (3) an NTP solution.

Of course, every platform will have its own pattern of how to subscribe to WiFi events and perform time synchronization, so we leave it up to you to figure out the details. More important than implementing these features is ensuring that they can be implemented. Take this to mean that while it is possible to test and develop without these features, a product cannot be launched without them! Creating an end-to-end example is a great way to ensure that it can be done, as well as provide application developers a sense of the total network and space constraints that they will have to deal with on a production device.

Make sure that you can run our examples! You will probably need to create something similar to our dev_framework files to do so.