Technology

While working on ad hoc networking, we grew impatient with the abyss separating the academic and industrial worlds (theory and practice, that is).

It is all like the magic diet: judging by the number of advertised miraculous cures there should hardly be any problem at all, yet … nothing REALLY works. Daily news on new routing protocols, conferences, workshops … and nothing practical.

Consortia-initiated attempts at standards, e.g. ZigBee®, are bound to disappoint with suboptimal offers. Consortia have never been good at thinking simple and small.

Our way has been criticized by academia for “lack of focus”. We violated the unwritten rule of presenting one thing at a time. Are we talking operating systems? Routing protocols? Is it about simulation? Memory footprint? What is it?

Well … it is … holistic. We are proud that it doesn’t fit the fragmented portions of the commonly accepted picture. The picture is nice, but the pieces don’t fit the puzzle.

 

Yes, we’ve been working with “independent components,” too. The fool’s errand of fitting square pegs into round holes has earned many noble names (anybody into system integration?). There is no “system integration” in our networks: they ARE ad hoc. Topology and connectivity, obviously, but also the hardware, the praxis, and the operations are all ad hoc; they are holistically ad hoc. The technology that makes this approach feasible is our main forte.

We adduce practical embedded applications, also ad hoc, also networks: olso networks. We would like to present them to a wide industrial audience, to prosper and to gain resources and topics for further research. To the scholars we say: let’s do something useful now … for a change. While we’re at it, let’s influence our students, so that they start the self-propelling spiral of innovations and novel applications, all stemming from a widespread and easily applicable technology.

Click here for white papers and downloads.

 

What is an olso (wireless) network

An olso network is a dynamic collection of nodes forming an ad hoc distributed system to achieve a certain well-defined functionality. The nodes may be mobile or stationary, and may all look the same or have diverse hardware resources. The required functionality may sit within, or cross, any conceivable “application space,” be it consumer electronics, industrial control, health care, entertainment, or automotive. The key point is that the requirements do not merely influence, but truly define all needed components, including network “layers” and hardware. Some of those components are physical, some are conceptual, but it is only the complete functionality that we call an “application.” There is no difference between “network” and “application” in this approach.

All nodes in an olso network have a common blueprint. At Olsonet, we have built such a blueprint (call it a practical abstraction) to blend ad hoc networking with the characteristics of the applications that we have dealt with in commercial as well as academic projects. Only with this blueprint in hand can we claim that “custom development of everything” to the specification is feasible, i.e., not prohibitively expensive or lengthy.

Hardware

  • Off-the-shelf microprocessors (MSP430, eCOG1, …)
  • RF modules (RFM, Chipcon, Xemics, Nordic, …)
  • NVM (non-volatile memory; EEPROM, flash, ferrite beads)
  • UART / USB
  • Ethernet
  • LCD
  • LEDs
  • I/O ports, ADC, DAC functionality

Many other options can be accommodated. For obvious reasons, hardware is tailored to the application, and, typically, only a subset of the above is practical in a given deployment.

Software

Our software is based on PicOS, our proprietary operating system for microcontrollers with small memory. To give you an idea, we can comfortably run multithreaded praxes taking full advantage of mesh forwarding in less than 1 KB of RAM. The typical amount of RAM in our nodes (determined by the most popular representatives of the MSP430 microcontroller family) is 2–10 KB. The complete system is organized in a layer-less, yet highly modular structure, described component by component below.

PicOS

Traditional multithreading tends to be costly in terms of RAM requirements. This is because each thread needs a separate (pre-allocated) stack. PicOS solves this problem by running all threads on the same stack. While this imposes certain restrictions on thread scheduling, we have turned those restrictions into advantages.

Formally speaking, PicOS threads are co-routines with multiple entry points and implicit control transfer. This means that they look like automata, i.e., descriptions of state transitions in a reactive system; a sample thread is sketched below.
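
The actual PicOS syntax uses its own macros, but the underlying idea can be sketched in plain C (the function, state, and event names below are illustrative assumptions, not PicOS code): a thread is a single function with several entry points, resumed at a recorded state and returning to the scheduler whenever it has to wait for an event.

    #include <stdio.h>

    /* Sketch of the FSM-thread idea: one function with several entry
       points (states); between events it simply returns to the
       scheduler, so all threads can share a single stack. */

    enum { ST_INIT, ST_WAIT, ST_SEND };
    enum { EV_NONE = 0, EV_SENSOR = 1 };

    typedef struct {
        int state;                 /* entry point to resume at next time */
    } thread_t;

    /* Run the thread until it blocks; 'event' identifies the event
       (if any) that caused this activation. */
    static void reporter(thread_t *t, int event)
    {
        switch (t->state) {

        case ST_INIT:
            /* one-time setup would go here */
            t->state = ST_WAIT;
            /* fall through: an explicit transition to the next state */

        case ST_WAIT:
            if (event != EV_SENSOR)
                return;            /* nothing to do yet: yield the CPU */
            t->state = ST_SEND;
            /* fall through */

        case ST_SEND:
            printf("reporting a sensor reading\n");
            t->state = ST_WAIT;    /* loop back and wait again */
            return;
        }
    }

    int main(void)
    {
        thread_t t = { ST_INIT };
        reporter(&t, EV_NONE);     /* first activation: blocks in ST_WAIT */
        reporter(&t, EV_SENSOR);   /* awaited event arrives: ST_SEND runs */
        return 0;
    }

Because the whole per-thread state fits in a tiny descriptor, a scheduler can keep many such threads alive on one shared stack.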

State transitions are triggered by the occurrence of events awaited by the thread. This novel programming paradigm has a number of important features that are very useful when implementing embedded reactive applications.

  • The states (entry points) introduce convenient grains of execution. This greatly simplifies synchronization among multiple threads, which only synchronize on the boundaries of those grains. There are no race conditions, unexplained hang-ups, or other nuisances haunting traditional multithreaded programs.

  • The code is reusable. The FSM structure goes well with the concept of design patterns. Many PicOS threads are in fact patterns covering whole classes of specific reactive processes.

  • Because re-scheduling can only occur at state boundaries, PicOS threads can all comfortably share the same stack. This makes it possible to run a large number of threads in trivially small memory. For illustration, the amount of RAM needed to describe one thread is 20 bytes.

One more important advantage of PicOS is the ease of turning its programs into simulation (emulation) models, which can be done mechanically. This is because PicOS descends from (and is closely coupled with) a high-fidelity network simulator.

VNETI

VNETI stands for Virtual NETwork Interface. Because traditional protocol layering gets in the way of wireless networking (especially in the context of small-footprint devices), we have implemented a radically different approach to organizing networking software. VNETI presents a small collection of orthogonal tools providing well-defined interfaces to three types of modules:

  • The praxis. This is the API whereby the application can carry out standard functions like setting up network connections and sending or receiving messages over the network.

  • The low-level RF driver (the PHY interface). Using these tools the driver can pass the packets received by the RF module to the “upper layers” as well as obtain outgoing packets to be sent out.

  • The plug-in. An arbitrary number of plug-ins can be plugged into VNETI to describe special processing for selected types of packets.

Look at it this way. VNETI offers built-in and generic means for buffering packets, moving them among different queues, and locating the relevant components of their payloads. A plug-in may intercept some or all packets arriving from the network or from the application, modify them, insert new packets into queues, and so on. All this happens in a way that preserves the built-in API and PHY interfaces, which are formally independent of the configuration of plug-ins. In contrast to the traditional layered approach, a VNETI plug-in acts in a holistic manner: it cuts through the (nonexistent) layers. For example, the same function can sensibly affect the application’s perception of a packet, as well as interact with the conceptual counterpart of the traditional MAC-layer.
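
As a purely hypothetical sketch (the type and callback names below are illustrative assumptions, not the actual VNETI API), a plug-in can be pictured as a small table of callbacks that inspect or modify packets on their way between the praxis, the packet queues, and the PHY driver:

    #include <stddef.h>

    typedef struct packet {
        unsigned char *buf;        /* packet payload */
        size_t         len;        /* payload length */
    } packet_t;

    typedef enum { PKT_PASS, PKT_CONSUME, PKT_DROP } verdict_t;

    typedef struct plugin {
        /* called for every packet arriving from the RF driver (PHY) */
        verdict_t (*from_phy)(packet_t *p);
        /* called for every packet submitted by the praxis (application) */
        verdict_t (*from_app)(packet_t *p);
        struct plugin *next;       /* plug-ins form a chain */
    } plugin_t;

    /* A trivial plug-in that stamps outgoing packets with a sequence
       number and lets everything else through unchanged. */
    static unsigned seqno;

    static verdict_t stamp_out(packet_t *p)
    {
        if (p->len > 0)
            p->buf[0] = (unsigned char)seqno++;
        return PKT_PASS;           /* hand the packet back to the queues */
    }

    static verdict_t pass_in(packet_t *p)
    {
        (void)p;
        return PKT_PASS;
    }

    plugin_t seq_stamper = { pass_in, stamp_out, NULL };

The point is that the same plug-in sees packets both where the application perceives them and where MAC-level processing would traditionally happen, which is exactly what “cutting through the layers” means.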

TARP

TARP (the Tiny Ad-hoc Routing Protocol) is Olsonet’s proprietary routing solution and yet another exemplification of our holistic view of wireless networking. To see this, envision a traditional ad-hoc routing scheme (e.g., AODV in ZigBee®), which operates like this:

  • Nodes advertise themselves by broadcasting HELLO messages. This way a node is able to learn the identity of neighbors, i.e., other nodes within its communication range.

  • When node S wants to establish a data path to node D, it initiates a route discovery procedure, whereby the prospective intermediate nodes exchange special messages to figure out the best path among them.

  • Once the best path has been established, every node on it knows the identity of the next hop neighbor on the path. On every leg of its trip, the packet is forwarded to a single specific neighbor.

  • If the path breaks (because of mobility or node failure), the first node discovering the breakage starts a route recovery procedure, which essentially works in the same way as the original route discovery.

The network combines broadcasting (HELLO messages, route discovery or recovery) with point-to-point transmissions (the actual forwarding). It operates in several different MODES (neighborhood identification, route discovery, forwarding, failure detection, route recovery). The implied assumption is that forwarding is something special and quite different from the other modes. In our opinion, there is no rationale for this assumption! It appears to us to be an atavistic desire to see wires where they do not exist. This is not ad-hoc routing! This is routing along virtual wires laid in an ad hoc and potentially dynamic environment.

Any packet forwarded to a specific next-hop node is in fact overheard by all one-hop neighbors of the forwarding node. The best path is an illusion. There is no such thing in the network, any more than there are direct links between adjacent pairs of nodes on the alleged path. Isn’t “wireless” another word for “link-less”?

TARP does it the right way:

In the traditional approach, the dilemma of a forwarding node is this: “To which neighbor should I direct the packet?” With TARP, the forwarding node has no dilemma at all – it simply sends the packet out. Note that the net outcome is the same in both cases: the packet has been broadcast to all nodes in the neighborhood.

But now a receiving node faces a dilemma: “Should I become a forwarding node for the received packet?” Thus, TARP changes the paradigm of forwarding: the basic question is not “where?” but “whether?” With a bit of insight, this change brings about the following features:

  • Controllable redundancy of paths. Alternative paths can be explored simultaneously with no need to “recover” them upon a node’s disappearance or failure. This may be highly advantageous from the viewpoint of reliability and fault tolerance.

  • Mode-less operation and no bureaucratic traffic. All packets ever transmitted through the network originate at the application.

  • No need to re-frame packets on every hop. As the forwarded packet is not addressed to any specific neighbor, it needs no neighbor address. This avoids extra overhead, which tends to be particularly painful for short packets, as most packets in sensor networks actually are.

  • Intermediate nodes, i.e., ones that are neither sources nor destinations, need no addresses.

  • Simplicity and automatic scalability: route quality trades for node footprint.

TARP solves the dilemma of a receiving node by applying a chain of rules to the received packet. Each rule bases its decision on information cached by the node while listening to passing packets. When a rule succeeds, the packet is dropped (the node concludes that it shouldn’t forward it). Otherwise, the next rule takes over. If all the rules have failed, the packet is rebroadcast, as in the sketch below.
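
A sketch of the rule-chain idea (the rule names and cache details below are illustrative assumptions, not the actual TARP code):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        unsigned src, dst, hops, serial;
    } pkt_t;

    /* A rule returns true when it succeeds, i.e., when it concludes
       that this node should NOT forward the packet. */
    typedef bool (*rule_t)(const pkt_t *);

    static bool seen_before(const pkt_t *p)        /* duplicate discard */
    {
        /* consult a cache of recently heard (src, serial) pairs;
           when the cache cannot decide, the rule fails (returns false) */
        (void)p;
        return false;
    }

    static bool off_shortest_path(const pkt_t *p)  /* SPD-like rule */
    {
        /* compare p->hops with the cached best hop count for (src, dst);
           fail when no such information is cached */
        (void)p;
        return false;
    }

    static rule_t rules[] = { seen_before, off_shortest_path };

    void tarp_receive(const pkt_t *p)
    {
        for (unsigned i = 0; i < sizeof rules / sizeof rules[0]; i++)
            if (rules[i](p))
                return;            /* some rule succeeded: drop the packet */
        printf("rebroadcasting packet %u from node %u\n", p->serial, p->src);
    }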

The rules are implemented in such a manner that they fail if not enough information is available in the cache to make an authoritative decision. This way a node with less memory will tend to drop fewer packets, which may result in more redundant (suboptimal) routing. But no packets will be lost (dropped incorrectly) because of the lack of storage. Thus, TARP automatically trades the quality of routes for device footprint.

The quintessence of TARP is the so-called SPD rule (for Suboptimal Path Discard), which acts to shrink the population of forwarding nodes to a controllable-width stripe along the shortest paths between source and destination. The width of this stripe can be controlled by the application to obtain the desired degree of redundancy.
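
For illustration only, an SPD-style decision might compare the number of hops a packet has already made with the shortest hop count cached for its source–destination pair and discard it when the difference exceeds a configurable slack; the slack then plays the role of the stripe width (this is a sketch of the concept, not the actual rule):

    #include <stdbool.h>

    /* Discard packets that stray too far from the shortest known path;
       'slack' controls the width of the forwarding stripe. */
    bool spd_discard(unsigned hops_so_far, unsigned best_known, unsigned slack)
    {
        if (best_known == 0)
            return false;          /* no cached information: the rule fails */
        return hops_so_far > best_known + slack;
    }

With a slack of zero, only nodes on a shortest path keep forwarding; a larger slack widens the stripe and buys redundancy at the cost of extra transmissions.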

VUEE: Virtual Underlay Execution Engine


Our software development platform is closely tied to a powerful simulation engine, which allows us to run and test networked applications in a high-fidelity virtual environment before burning them into real hardware. This is not a simulation in some simplified model, but DETAILED EMULATION, i.e., true execution of the complete, functional application in a realistic, albeit virtual, world.

The virtual execution can be carried out in the so-called visualization mode, whereby the network interacts with the human user in real time. If the complexity of the model makes it impossible for the emulation to keep up with the real-time behavior of the target network (e.g., too many nodes), the user can choose a suitable slow-motion factor. External components, like signals on I/O ports or UART I/O, can be described by prearranged timed scripts, or can be handled by external agents communicating with VUEE via TCP ports. For example, the model can easily be interfaced to real-world programs intended to communicate with real nodes via OSSI.

Interested in commercial development? Many scenarios and ways of co-operation are feasible. We outline just one of them: custom development. Please note that we engage in research and academic projects only with our established partners. Only commercial projects fully funded by the customer start via this link: NewProjects.

If you manufacture microprocessors or low-power radios and are interested in having your products supported by our platform, please follow the link to Porting.