Monday, April 24, 2017

Node Red Dashboard for Raspberry Pi

What is Node Red?




Node-RED is a programming tool for wiring together hardware devices, APIs and online services. It was developed as a visual programming tool for the Internet of Things. It also allows you to produce and publish a funky web based dashboard with one click.


Node-RED includes a browser-based editor that makes it easy to wire together flows using the selection of nodes in the side menu. Flows can then be deployed to the runtime with a single click. JavaScript functions can be created within the editor to customise the messages passed between nodes. A built-in library allows you to save useful functions, templates or flows for re-use.

The light-weight runtime is built on Node.js, taking full advantage of its event-driven, non-blocking model. This makes it ideal to run on low-cost hardware such as the Raspberry Pi as well as in the cloud.

Nodes can be anything from a timer used to trigger events to a Raspberry Pi GPIO output used to turn on an LED (or salt lamp in our example). With over 225,000 modules in Node's package repository, it is easy to extend the range of nodes to add new capabilities. As we will demonstrate, there are packages available for the Raspberry Pi and Sense Hat. The flows created in Node-RED are stored using JSON.

Node-RED was developed by IBM, which contributed it as an open source JS Foundation project in 2016.

The Himalayan Salt Lamp Project




My wife likes salt lamps. Salt lamps allegedly remove dust, pollen, cigarette smoke, and other contaminants from the air. How effective this is I don't know, and it is really irrelevant; as I said, my wife likes them! Salt is very hygroscopic, that is, it absorbs water - this is the basis of the claimed health benefits, as the salt also absorbs any foreign particles the water may be carrying. The water then evaporates when the lamp is switched on, leaving the contaminants trapped in the salt.

Salt is so hygroscopic that it readily dissolves in the water it absorbs: this property is called deliquescence. It is of course a problem if your expensive Himalayan salt lamp dissolves into a puddle of salty water, especially if it is connected to 240VAC. In our house this melting process starts at relative humidities above 70%.

The solution is to turn your lamp on if the humidity gets above 70%. This seemed like a good excuse to start our home automation hub and learn about Node-RED. Turning a lamp on and off based on humidity and time (lamp goes on at 5pm and off at 10pm) is trivial using Python, so we won't cover that in detail - a minimal sketch is shown below. What we will look at is manually controlling the lamp via our Node-RED dashboard and the other associated data we display.
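
For reference, here is that minimal sketch of the humidity/time logic. It assumes a Sense HAT supplies the humidity reading and the lamp is switched through a PowerSwitch Tail driven from GPIO 18 (the pin number is just an example - use whatever pin you have wired).

#!/usr/bin/python
# lamp_control.py - turn the salt lamp on above 70% RH or between 5pm and 10pm.
# Assumes a Sense HAT for humidity and a PowerSwitch Tail on GPIO 18 (BCM).

from datetime import datetime
from time import sleep

import RPi.GPIO as GPIO
from sense_hat import SenseHat

LAMP_PIN = 18            # example BCM pin wired to the PowerSwitch Tail
RH_THRESHOLD = 70.0      # salt starts to deliquesce above this relative humidity

GPIO.setmode(GPIO.BCM)
GPIO.setup(LAMP_PIN, GPIO.OUT)
sense = SenseHat()

try:
    while True:
        humidity = sense.get_humidity()
        hour = datetime.now().hour
        lamp_on = humidity > RH_THRESHOLD or 17 <= hour < 22
        GPIO.output(LAMP_PIN, GPIO.HIGH if lamp_on else GPIO.LOW)
        sleep(60)                                # check once a minute
finally:
    GPIO.cleanup()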

Node-RED and the Raspberry Pi




If you are running Raspbian Jessie on your Pi then you should already have Node-RED installed. Before starting the Node-RED server it is worth installing a few packages that you will need. Type the following at the CLI:

sudo apt-get update
sudo apt-get install npm
cd $HOME/.node-red
npm install node-red-dashboard
npm install node-red-node-snmp
npm install node-red-contrib-os

Node-RED is started by running the following command in the terminal:

node-red-start
Once started, you use a browser (either on the Pi or remotely) to build your applications and configure your dashboard. I used my MacBook Air; to do this, point your browser at the IP address of your Pi on port 1880. If you do it on the Pi itself, the URL would be 127.0.0.1:1880 or localhost:1880. The associated dashboard URL is <IP Address>:1880/ui. So for example my Raspberry Pi dashboard is at http://192.168.0.18:1880/ui.

Most of the Raspberry Pi information charted in the dashboard shown above is from the node-red-contrib-os package. For example, the information on the SD card is from the Drives node, which you use to query the hard drives. Values for size, used and available are expressed in KB (1024 bytes). The value for capacity is a number between 0 and 1; capacity * 100 is the percentage used.

Some of the flows are shown below. The first step is to drag across a timer which you can use to poll the Drive node. Our timer sends a timestamp every minute.

Connect the timer to a Drive node and it will start pumping out messages with the size, used, available and capacity values for every drive on your target system. You can use a Debug node to see messages being sent out by any node. This is very useful in debugging your flows. On the Raspberry Pi there will be a few different file systems on your SD Card so you have to be specific about which area you want information about.


You can add a Function node to include custom JavaScript to process the messages passed between the nodes. The JavaScript used to extract the various Drive information that I use is shown below. The topic variable is used as the name for charts with multiple inputs.

var msg1,msg2,msg3;

if (msg.payload.filesystem === '/dev/root') {

    msg1 = { payload: msg.payload.used };
    msg2 = { payload: msg.payload.available };
    msg3 = { payload: msg.payload.capacity * 100 };

    msg1.topic = "used"
    msg2.topic = "available"
    msg3.topic = "capacity"

}

return [ msg1, msg2, msg3 ];

CPU Temperature




To display CPU temperature we use a different technique. On the Raspberry Pi you can display the current CPU temperature by typing:

/opt/vc/bin/vcgencmd measure_temp
You can use an Exec node to run OS commands, so connect the same timer node to an Exec node and enter the command above. We then have to do a bit of processing to extract the temperature as a number. Use another Function node with the following code:

msg.payload = msg.payload.replace("temp=","").replace("'C\n","");

return msg;
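
If you want to sanity-check that parsing outside Node-RED, the same extraction can be done in a few lines of Python run directly on the Pi (a quick sketch):

import subprocess

# vcgencmd returns a string like "temp=47.2'C\n"
raw = subprocess.check_output(['/opt/vc/bin/vcgencmd', 'measure_temp']).decode()
cpu_temp = float(raw.replace("temp=", "").replace("'C\n", ""))
print(cpu_temp)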



There are also nodes available for the Sense Hat. You need to use functions similar to those above to extract the various sensor data values.


Controlling GPIO using the Dashboard



Manual control of a GPIO is fairly straightforward. The one trick is that the Switch node outputs true/false while the Raspberry Pi GPIO out node expects a 1/0 input, so we include another Function node to mediate. The relevant code is:

msg.payload = msg.payload ? 1 : 0;

return msg;

Of course our Raspberry Pi outputs 3.3VDC which won't turn on a 240VAC lamp, so we use a PowerSwitch Tail kit as an intermediary.





Saturday, April 8, 2017

STEMTera (Arduino Breadboard) Tutorial

What is the STEMTera?



STEMTera was the first project that I have supported on Kickstarter and the experience has been overwhelmingly positive. So what is STEMTera?

At its simplest the STEMTera is a breadboard with an embedded Arduino UNO. Most shields will plug straight in. But it is more than just a simple Arduino prototyping platform, it also includes:
  • a LEGO® compatible bottom which allows you to mount it directly on your LEGO creation.
  • An exposed ATmega32U2 microcontroller, which lets users develop native USB projects with an extra 21 I/O pins. These extra I/O pins can work directly with the LUFA framework. More on this below.
  • Multiple IDE support including Atmel® Studio, Arduino IDE, AVR-GCC, AVR-GCC with LUFA, Scratch, etc.
  • Embedded LEDs to indicate Power on, Tx and Rx, and one connected to D13 for your own use.
The Arduino functionality is the same as for an UNO, plug the USB port into your computer and away you go. The ATmega32U2 functionality is new and deserves a bit more explanation.

ATmega32U2


The newer Arduino Uno boards have two programmable microcontrollers: one is the ATmega328, which is the Arduino processor that you usually upload your sketches to, and the second is the ATmega16U2, which is flashed to operate as a USB-to-serial converter.

The ATmega16U2 chip on the Arduino board acts as a bridge between the computer's USB port and the main processor's serial port. Previous versions of the Uno and Mega2560 had an Atmega8U2. It runs firmware that can be updated through a special USB protocol called DFU (Device Firmware Update).

As part of the STEMTera KickStarter campaign there was a stretch target which if met would result in the ATmega16U2 being upgraded to the ATmega32U2. This target was met and so the upgrade was incorporated into the finished product. Even better, the ATmega32U2 pins have been brought out to the breadboard so that you can utilise them.

By updating the ATmega32U2 firmware, your STEMTera can appear as a different USB device (MIDI controller, HID, etc.).

DFU Programmers




To update the firmware on the STEMTera ATmega32U2 you will need a DFU Programmer.

Windows: Download Atmel's FLIP programmer.

Mac: Install MacPorts. Once MacPorts is installed, in a Terminal window, type:

sudo port install dfu-programmer
NB: If you've never used sudo before, it will ask for your password. Use the password you log in to your Mac with. sudo allows you to run commands as the administrator of the computer.

Linux: from a command line type

sudo apt-get install dfu-programmer

Enter DFU mode


To enter program (DFU) mode you need to short the ATmega32U2 ICSP reset pin to ground until the red LED starts to flash.

Flash the chip


Windows: use FLIP to upload the hex file to your board

Mac & Linux: from a terminal window, change directories to get into the folder with the firmware. If you saved the firmware in your downloads folder on OSX, then you might type:

cd Downloads/
Once there, type:

sudo dfu-programmer atmega32u2 erase
When this command is done and you get a command prompt again, say you want to reflash the original Arduino firmware (Arduino-usbserial-uno.hex), then you would type:

sudo dfu-programmer atmega32u2 flash Arduino-usbserial-uno.hex
Finally:

sudo dfu-programmer atmega32u2 reset





Friday, March 17, 2017

Cayenne Competition

Cayenne




We mentioned Cayenne in an earlier post when we were looking for a video web serving solution for the Raspberry Pi. They provide a drag and drop dashboard for your IoT projects.



They have announced a home automation contest so we thought we would give it a try. The judging criteria for the contest are:
  • Interaction of Arduino hardware and Cayenne software with various areas of the home
  • Use of Cayenne’s Triggers & Alerts and Scheduling features
  • Number of devices and sensors connected
  • Real world practicality and usability

You have to use Cayenne obviously and need to include at least one Arduino.

Connecting an Arduino to the Cayenne Server


This is pretty well documented for the Arduino and Raspberry Pi but there were a few missing steps in getting the connection script to run on our Mac. There are 3 things you need to configure:
  1. Connect your Arduino to your PC. Open up your Arduino IDE, download the Cayenne Library.
  2. Set up your free Cayenne account. Start a new project and add an Arduino. Copy the sketch for your device and paste it into the IDE. Upload the sketch and run it.
  3. This was the tricky bit for us. You need to run a connection script on your Mac which redirects the Arduino traffic to the Cayenne server. The scripts are located under the extras/scripts folder in the main Arduino library folder. The instruction for Linux and OSX is to run: ./cayenne-ser.sh (you may need to run it with sudo).

Getting the Connection Script to work on a Mac


First you need to find the script. We got to ours using:

cd Arduino/libraries/Cayenne/extras/scripts
As instructed, we then tried:

./cayenne-ser.sh
But received the error:

-bash: ./cayenne-ser.sh: Permission denied
No problem we thought, we will just use sudo

sudo ./cayenne-ser.sh
Received a new error

sudo: ./cayenne-ser.sh: command not found
That's weird. So we tried:

sudo sh ./cayenne-ser.sh
And received another error, but we were getting closer...

This script uses socat utility, but could not find it.

  Try installing it using: brew install socat
So we gave that a shot but we didn't have Homebrew installed. Homebrew is a package manager for the Mac (similar to apt-get on Raspbian). To install Homebrew:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Once Homebrew is installed you can use brew to install socat. Socat is a command line based utility that establishes two bidirectional byte streams and transfers data between them. This is used to get information from the Arduino to the Cayenne server.

brew install socat
Once you have done all that, you can run your connection script again. The script ran but didn't use the correct port. You can specify which port to use with the following flag:

sudo sh cayenne-ser.sh -c /dev/tty.usbmodem1421
Use the port listed in the Arduino IDE under Tools -> Port.
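
If you are unsure which device name to use, you can also list the candidate ports yourself. A quick Python sketch, assuming the board enumerates as a tty.usbmodem device as UNOs generally do on a Mac:

import glob

# List USB serial devices - the Arduino usually shows up as /dev/tty.usbmodemXXXX
print(glob.glob('/dev/tty.usb*'))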

Cayenne Hello World


To test your new dashboard connection to the Arduino, the easiest way is to add a switch widget pointed at digital output 13 (D13). On most UNO variants this is also connected to an LED so toggling that pin will toggle the LED. If you don't have an onboard LED then you can always connect an external LED. Don't forget to use a current limiting resistor if you do.

The beauty of this is that you don't even have to add any code to the Arduino sketch, you can just use the connection sketch provided when you start a new project. For completeness we will include the code below; this is for a USB connection. Don't forget to insert the token for your project.

#include <CayenneSerial.h>

// Cayenne authentication token. This should be obtained from the Cayenne Dashboard.
char token[] = "YOUR_TOKEN_HERE";

void setup()
{
  //Baud rate can be specified by calling Cayenne.begin(token, 9600);
  Cayenne.begin(token);
}

void loop()
{
  Cayenne.run();
}



The setup for your button should look like this:


So apart from a bit of messing about to get the connection script to run, it all works as advertised. We might have a crack at the home automation competition if we can think of something original to do...

HC-SR04 Ultrasonic Sensor Python Class for Raspberry Pi

The HC-SR04




The HC-SR04 ultrasonic ranging module provides 2 cm - 400 cm non-contact measurement, with ranging accuracy up to 3 mm. The module includes an ultrasonic transmitter, receiver and control circuitry. The time difference between transmission and reception of the ultrasonic signal is measured. Using the speed of sound and the 'Speed = Distance/Time' equation, the distance between the source and target can be easily calculated.

Credit to Vivek and his article on the same subject for the diagrams.




Wiring the HC-SR04 to a Raspberry Pi


The module has 4 pins:

  • VCC - 5V Supply
  • TRIG - Trigger Pulse Input
  • ECHO - Echo Pulse Output
  • GND - 0V Ground 

Wiring is straightforward with one exception: note that the sensor operates at 5V, not the 3.3V of the Raspberry Pi. Connecting the ECHO pulse pin directly to the Raspberry Pi would be a BAD idea and could damage the Pi. We need to use a voltage divider or a logic level converter module to drop the logic level from the HC-SR04 to a maximum of 3.3V. Current draw for the sensor is 15 mA.

As we have a spare logic level converter, we will use that. Connections for the logic converter are shown below.


For the voltage divider option: Vout = Vin x R2/(R1+R2) = 5 x 10000/(4700 + 10000) = 3.4V
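
If you want to check other resistor combinations, the same calculation as a quick Python sketch:

def divider_vout(vin, r1, r2):
    # Voltage divider: Vout = Vin * R2 / (R1 + R2)
    return vin * r2 / (r1 + r2)

# 4.7k over 10k drops the 5V ECHO signal to a Pi-safe level
print(round(divider_vout(5.0, 4700, 10000), 2))    # 3.4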






Python Class for the HC-SR04 Ultrasonic Sensor



To utilise the HC-SR04:

  1. Provide a trigger signal to the TRIG input; it requires a HIGH pulse of at least 10 µs duration.
  2. This causes the module to transmit eight 40 kHz ultrasonic bursts.
  3. If there is an obstacle in front of the module, it will reflect those ultrasonic waves.
  4. If the signal comes back, the ECHO output of the module will be HIGH for the duration of the time taken to send and receive the ultrasonic signals. The pulse width ranges from 150 µs to 25 ms depending upon the distance of the obstacle from the sensor, and will be about 38 ms if there is no obstacle.
  5. Obstacle distance = (high level time × velocity of sound) / 2, where the velocity of sound is 343.21 m/s at sea level and 20°C.
  6. Allow at least 60 ms between measurements.





The time measured by the echo pulse covers the return travel of the ultrasonic signal, so the one-way time is Time/2.

Distance = Speed * Time/2

Speed of sound at sea level = 343.21 m/s or 34321 cm/s

Thus, Distance = 17160.5 * Time (unit cm).
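
As a quick sanity check of that constant, a 1 ms echo pulse corresponds to a little over 17 cm:

pulse_duration = 0.001                     # seconds that ECHO stayed HIGH
distance_cm = 17160.5 * pulse_duration     # (speed of sound / 2) in cm/s
print(round(distance_cm, 2))               # 17.16 cm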

As we are using the ultrasonic sensor with our Raspberry Pi robot, we have created a python class that can be easily imported and used. Note the calibration function which can be used to help correct for things like altitude and temperature.

We have included a simple low pass filter function which is equivalent to an exponentially weighted moving average. This is useful for smoothing the distance values returned from the sensor. The smaller the value of beta, the greater the smoothing (with beta = 1 the filter just returns the raw reading); a short illustration follows.
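
A standalone sketch using the same update rule as the class method below, showing how a step change in distance is smoothed:

def low_pass(value, previous, beta):
    # Y(n) = (1 - beta) * Y(n-1) + beta * X(n)
    return previous - beta * (previous - value)

readings = [20.0, 30.0, 30.0, 30.0]        # a step from 20 cm to 30 cm
smoothed = readings[0]
for r in readings[1:]:
    smoothed = low_pass(r, smoothed, beta=0.2)
    print(round(smoothed, 2))              # 22.0, 23.6, 24.88 - slowly tracks the step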

#!/usr/bin/python
# RS_UltraSonic.py - Ultrasonic Distance Sensor Class for the Raspberry Pi 
#
# 15 March 2017 - 1.0 Original Issue
#
# Reefwing Software
# Simplified BSD Licence - see bottom of file.

import RPi.GPIO as GPIO
import os, signal

from time import sleep, time

# Private Attributes
__CALIBRATE      = "1"
__TEST           = "2"
__FILTER         = "3"
__QUIT           = "q"

class UltraSonic():
    # Ultrasonic sensor class 
    
    def __init__(self, TRIG, ECHO, offset = 0.5):
        # Create a new sensor instance
        self.TRIG = TRIG
        self.ECHO = ECHO
        self.offset = offset                             # Sensor calibration factor
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(self.TRIG, GPIO.OUT)                  # Set pin as GPIO output
        GPIO.setup(self.ECHO, GPIO.IN)                   # Set pin as GPIO input

    def __str__(self):
        # Return string representation of sensor
        return "Ultrasonic Sensor: TRIG - {0}, ECHO - {1}, Offset: {2} cm".format(self.TRIG, self.ECHO, self.offset)

    def ping(self):
        # Get distance measurement
        GPIO.output(self.TRIG, GPIO.LOW)                 # Set TRIG LOW
        sleep(0.1)                                       # Min gap between measurements        
        # Create 10 us pulse on TRIG
        GPIO.output(self.TRIG, GPIO.HIGH)                # Set TRIG HIGH
        sleep(0.00001)                                   # Delay 10 us
        GPIO.output(self.TRIG, GPIO.LOW)                 # Set TRIG LOW
        # Measure return echo pulse duration
        while GPIO.input(self.ECHO) == GPIO.LOW:         # Wait until ECHO is LOW
            pulse_start = time()                         # Save pulse start time

        while GPIO.input(self.ECHO) == GPIO.HIGH:        # Wait until ECHO is HIGH
            pulse_end = time()                           # Save pulse end time

        pulse_duration = pulse_end - pulse_start 
        # Distance = 17160.5 * Time (unit cm) at sea level and 20C
        distance = pulse_duration * 17160.5              # Calculate distance
        distance = round(distance, 2)                    # Round to two decimal points

        if distance > 2 and distance < 400:              # Check distance is in sensor range
            distance = distance + self.offset
            print("Distance: ", distance," cm")
        else:
            distance = 0
            print("No obstacle")                         # Nothing detected by sensor
        return distance

    def calibrate(self):
        # Calibrate sensor distance measurement
        while True:
            self.ping()
            response = input("Enter Offset (q = quit): ")
            if response == __QUIT:
                break;
            sensor.offset = float(response)
            print(sensor)
            
    @staticmethod
    def low_pass_filter(value, previous_value, beta):
        # Simple infinite-impulse-response (IIR) single-pole low-pass filter.
        # ß = discrete-time smoothing parameter (determines smoothness). 0 < ß < 1
        # LPF: Y(n) = (1-ß)*Y(n-1) + ß*X(n) = Y(n-1) - ß*(Y(n-1) - X(n))
        smooth_value = previous_value - (beta * (previous_value - value))
        return smooth_value
        

def main():
    sensor = UltraSonic(8, 7)       # create a new sensor instance on GPIO pins 7 & 8
    print(sensor)

    def endProcess(signum = None, frame = None):
        # Called on process termination. 
        if signum is not None:
            SIGNAL_NAMES_DICT = dict((getattr(signal, n), n) for n in dir(signal) if n.startswith('SIG') and '_' not in n )
            print("signal {} received by process with PID {}".format(SIGNAL_NAMES_DICT[signum], os.getpid()))
        print("\n-- Terminating program --")
        print("Cleaning up GPIO...")
        GPIO.cleanup()
        print("Done.")
        exit(0)

    # Assign handler for process exit
    signal.signal(signal.SIGTERM, endProcess)
    signal.signal(signal.SIGINT, endProcess)
    signal.signal(signal.SIGHUP, endProcess)
    signal.signal(signal.SIGQUIT, endProcess)

    while True:
        action = input("\nSelect Action - (1) Calibrate, (2) Test, or (3) Filter: ")

        if action == __CALIBRATE:
            sensor.calibrate()
        elif action == __FILTER:
            beta = input("Enter Beta 0 < ß < 1 (q = quit): ")
            filtered_value = 0
            if beta == __QUIT:
                break
            while True:
                filtered_value = sensor.low_pass_filter(sensor.ping(), filtered_value, float(beta))
                filtered_value = round(filtered_value, 2)
                print("Filtered: ", filtered_value, " cm")
        else:
            sensor.ping()

if __name__ == "__main__":
    # execute only if run as a script
    main()

## Copyright (c) 2017, Reefwing Software
## All rights reserved.
##
## Redistribution and use in source and binary forms, with or without
## modification, are permitted provided that the following conditions are met:
##
## 1. Redistributions of source code must retain the above copyright notice, this
##   list of conditions and the following disclaimer.
## 2. Redistributions in binary form must reproduce the above copyright notice,
##   this list of conditions and the following disclaimer in the documentation
##   and/or other materials provided with the distribution.
##
## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
## ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
## WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
## DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
## ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
## (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
## LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
## ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
## (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.



Wednesday, March 15, 2017

Controlling the Raspberry Pi via a web browser

Web Controlled Robot





Now that we can stream video to a web page it would be nice to be able to remotely control our robot. To do this we will use the Raspberry Pi to run a web server that serves the page used to control the robot. Once we have this up and running you will be able to drive your robot around using a browser on your laptop via WiFi on your LAN.

As shown in the previous post, you can use the python command print(server) to see what URL you need to point your browser at to see the video and control your robot. The way the controls work is as follows:
  1. Typing the address of your Pi served page (e.g. http://192.168.0.9:8082) into your browser will send a web request to the python program running the server, in our case RS_Server.py.
  2. RS_Server responds with the contents of index.html. Your browser renders this HTML and it appears in your browser.
  3. The broadcasting of video data is handled by the broadcast thread object in RS_Server. The BroadcastThread class implements a background thread which continually reads encoded MPEG1 data from the background FFmpeg process started by the BroadcastOutput class and broadcasts it to all connected websockets. More detail on this can be found at pistreaming if you are interested. Basically the camera is continually taking photos, converting them to MPEG frames and sending them at the frame rate to a canvas in your browser.
  4. You will see below that we have modified the index.html file to display a number of buttons to control our robot. Pressing one of these buttons will send a GET request to the server running on your Pi with a parameter of "command" and the value of the button pressed. We then handle the request by passing on the appropriate command to our MotorControl class (a minimal sketch of this parsing is shown below). To do this we will need to bring together RS_Server and RS_MotorControl in our new RS_Robot class.
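
For reference, a sketch of how the command parameter can be pulled out of the GET request path on the server side (not the exact RS_Server code, which is linked at the end of this post):

from urllib.parse import urlparse, parse_qs

def parse_command(path):
    # Extract the 'command' query parameter from a request path,
    # e.g. '/?command=f' returns 'f'. Returns None if no command was sent.
    query = parse_qs(urlparse(path).query)
    values = query.get('command')
    return values[0] if values else None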

Modifying index.html



The index.html file provided by pistreaming just creates a canvas in which to display our streaming video. To this we will add a table with 9 command control buttons for our robot. You could get away with only 5 (Forward, Back, Left, Right and Stop) but looking ahead we know we will also need 4 more (speed increase, speed decrease, auto and manual). Auto and Manual will toggle between autonomous control and remote control (i.e. via the browser). Associated with each button is a JavaScript script that will send the appropriate command when the button is clicked.

In addition to controlling your robot via the on screen buttons you can use the keyboard. We have mapped the following functionality:

Up Arrow    = Forward
Down Arrow  = Back
Left Arrow  = Left
Right Arrow = Right
Space       = Stop
-           = Decrease Speed
+           = Increase Speed
m           = Manual
a           = Autonomous

You can modify the index.html to map whatever keybindings you want. Be aware that the keycode returned by different browsers isn't always consistent. You can use the JavaScript Event KeyCode Test Page to find out what key code your browser returns for different keys.

The manual and auto modes don't do anything at this stage. 

The modified index.html file is shown below.

<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=${WIDTH}, initial-scale=1"/>
    <title>Alexa M</title>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript" charset="utf-8"></script>

    <style>
        .controls {
            width: 150px;
            font-size: 22pt;
            text-align: center;
            padding: 15px;
            background-color: green;
            color: white;
        }
    </style>

    <style type="text/css">
            body {
                background: ${BGCOLOR};
                text-align: center;
                margin-top: 2%;
            }
            #videoCanvas {
                /* Always stretch the canvas to 640x480, regardless of its internal size. */
                width: ${WIDTH}px;
                height: ${HEIGHT}px;
            }
    </style>

    <script>
    function sendCommand(command)
    {
        $.get('/', {command: command});
    }
    
    function keyPress(event)
    {
        keyCode = event.keyCode;
        
        switch (keyCode) {
            case 38:                // up arrow
                sendCommand('f');
                break;
            case 37:                // left arrow
                sendCommand('l');
                break;
            case 32:                // space
                sendCommand('s');
                break;
            case 39:                // right arrow
                sendCommand('r');
                break;
            case 40:                // down arrow
                sendCommand('b');
                break;
            case 109:               // - = decrease speed
            case 189:
                sendCommand('-');
                break;
            case 107:
            case 187:
                sendCommand('+');   // + = increase speed
                break;
            case 77: 
                sendCommand('m');   // m = manual (remote control)
                break;
            case 65:
                sendCommand('a');   // a = autonomous
                break;
            default: return;        // allow other keys to be handled
        }
        
        // prevent default action (eg. page moving up/down with arrow keys)
        event.preventDefault();
    }
    $(document).keydown(keyPress);
    </script>
</head>

<body>

    <h1><font color="white">Alexa M</font></h1>

    <!-- The Canvas size specified here is the "initial" internal resolution. jsmpeg will
        change this internal resolution to whatever the source provides. The size the
        canvas is displayed on the website is dictated by the CSS style.
    -->
    <canvas id="videoCanvas" width="${WIDTH}" height="${HEIGHT}">
        <p>
            Please use a browser that supports the Canvas Element, like
            <a href="http://www.google.com/chrome">Chrome</a>,
            <a href="http://www.mozilla.com/firefox/">Firefox</a>,
            <a href="http://www.apple.com/safari/">Safari</a> or Internet Explorer 10
        </p>
    </canvas>
    <script type="text/javascript" src="jsmpg.js"></script>
    <script type="text/javascript">
        // Show loading notice
        var canvas = document.getElementById('videoCanvas');
        var ctx = canvas.getContext('2d');
        ctx.fillStyle = '${COLOR}';
        ctx.fillText('Loading...', canvas.width/2-30, canvas.height/3);
        // Setup the WebSocket connection and start the player
        var client = new WebSocket('ws://${ADDRESS}/');
        var player = new jsmpeg(client, {canvas:canvas});
    </script>

    <table align="center">
    <tr><td  class="controls" onClick="sendCommand('-');">-</td>
        <td  class="controls" onClick="sendCommand('f');">Forward</td>
        <td  class="controls" onClick="sendCommand('+');">+</td>
    </tr>
    <tr><td  class="controls" onClick="sendCommand('l');">Left</td>
        <td  class="controls" onClick="sendCommand('s');">Stop</td>
        <td  class="controls" onClick="sendCommand('r');">Right</td>
    </tr>
    <tr><td  class="controls" onClick="sendCommand('m');">Manual</td>
        <td  class="controls" onClick="sendCommand('b');">Back</td>
        <td  class="controls" onClick="sendCommand('a');">Auto</td>
    </tr>
    </table>

</body>
</html>

Python Robot Class


As Alexa M continues to evolve, so too will this robot class. For now we can keep things pretty simple. In addition to creating a robot class we have updated the motor control, servo and server classes. Rather than reproduce all the code, we will provide links to our Gist Repository where you can download the latest versions. For completeness, I will also provide links to the HTML and JavaScript library that you will need. All these files need to be in the same directory.

  1. RS_Robot.py version 1.0 - Run this script on your Pi to create a telepresence rover.
  2. RS_Server.py version 1.1 - Updated to include command parsing.
  3. RS_MotorControl.py version 1.1 - New motor control methods.
  4. RS_Servo.py version 1.2 - License added.
  5. index.html version 1.0 - The file shown in the previous section.
  6. jsmpg.js - Dominic Szablewski's Javascript-based MPEG1 decoder.
That completes the remote control and video streaming portion of the design. We hope you have as much fun driving around your robot as we do. Next up we will look at battery monitoring and autonomous control of the robot.

Sunday, March 5, 2017

Streaming Video from the Raspberry Pi Camera

Building a Telepresence Robot


When building a robot you quickly work out that you have two choices with regards to controlling it: autonomous or some sort of remote control. We will develop both for Alexa M. We are going with remote control first because we are waiting for our ultrasonic mounting bracket to arrive from China.

As Alexa M has the Raspberry Pi camera fitted it makes sense to stream the video so we can have a view of what the robot is seeing. In effect a simple telepresence rover.

There are many different approaches for providing remote control to a robot (including wired, WiFi, Bluetooth, or RF). We wanted something wireless, with a Python API which could incorporate the video stream with minimal lag. That quickly narrowed things down and we chose control via WiFi.

Robot control via WiFi is pretty straightforward. You use a micro-framework like Bottle or Flask to set up the Pi as a web server and then you can use your browser to access the associated web page. Well, maybe it isn't that straightforward, but at least it is well documented. Streaming video to the same web page turned out to be a bit of a challenge - but not impossible. We were surprised that this wasn't a problem with an obvious solution given the numerous requests on the web for this functionality. The underlying issue seems to be that the Pi's camera outputs raw H.264, and what most browsers want is an MPEG transport stream. Given video was the tricky bit, we used this to decide which framework to use.

Video Streaming - The Options


The following is a list of the options that we came across when searching for a solution. No doubt there are many more, and if there are any we missed then let us know in the comments.
  1. picamera - was our first stop. It is a pure Python interface to the Raspberry Pi camera module. Perfect! Except it doesn't do streaming. For anything else it is very good.
  2. RPi-Cam-Web-Interface - is a web interface for the Raspberry Pi Camera module that can be opened in any browser (smartphones included). Now we are cooking. Follow the link to install this on your Pi. It works very well, has zero lag and probably has the best video quality of the options we tried. However, server-side coding, HTML, CSS and JavaScript are not areas of expertise for us, so we need a pretty idiot-proof guide to modding this. I'm sure you could add custom controls to the page served by RPi-Cam-Web-Interface but it wasn't obvious how to do this.
  3. bottle - is a fast, simple and lightweight WSGI micro web-framework for Python. It is distributed as a single file module and has no dependencies other than the Python Standard Library. The Raspberry Pi forums include an example of how to stream video using bottle so this was definitely a contender. Electronut Labs provide a simple tutorial on turning an LED on/off using bottle as well.
  4. flask - is another lightweight WSGI micro web-framework for Python. It is similar to bottle and you would probably choose flask over bottle if you had a more complicated application (over 1000 lines appears to be the consensus). Miguel has a tutorial on streaming video with flask and there is another guide provided by CCTV camera pros for the Raspberry Pi. Either flask or bottle would get the job done.
  5. Cayenne - helps you build a drag and drop web based dashboard for your IoT applications (i.e. Arduino and Raspberry Pi). It is pretty fancy but it can't do video streaming (yet).
  6. UV4L - was originally conceived as a modular collection of Video4Linux2-compliant, cross-platform drivers. It has evolved over the years and now includes a full-featured Streaming Server component. There is a module for single or dual Raspberry Pi CSI Camera boards but it is command line based and we would prefer a python API. At this stage there are easier options.
  7. pistreaming - provides low latency streaming of the Pi's camera module to any reasonably modern web browser. This is written by the same guy that did the picamera module, all the source code is provided and most importantly it is documented well enough for us to be able to modify the served page to do what we require. The video isn't as good as RPi-Cam-Web-Interface but there is no lag on our LAN. This is the option we ended up using.

PiStreaming


To get the pistreaming solution to work you will need 3 files:
  1. index.html - the html code for the page that you are serving;
  2. server.py - the python code which serves up the video stream; and
  3. jsmpg.js - Dominic Szablewski's Javascript-based MPEG1 decoder.
These can all be cloned from the pistreaming repository. As a first step install the code by following the instructions at pistreaming. Once you have that up and working you can tweak it for your purposes.

RS_Server - a Video Streaming Python Class


To make streaming compatible with our robot class we have turned server.py into a server class. We have made a few other tweaks like inverting the camera since ours is mounted upside down. The print(server) command will display the URL where you can view the stream. The Server class is designed to be imported into another class and usage should be obvious from the class documentation and instructions at pistreaming; a minimal usage sketch is shown below.
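
A minimal usage sketch (assuming RS_Server.py, index.html and jsmpg.js are all in the same directory):

from RS_Server import Server

server = Server()          # initializes the Pi camera
print(server)              # prints the URL to point your browser at
try:
    server.start()         # blocks while streaming
except KeyboardInterrupt:
    server.cleanup()       # stop recording and shut the servers down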



We have also changed the index.html file in preparation for controlling the robot via the website, but we will cover this in a subsequent post.

#!/usr/bin/env python
# RS_Server.py - Web Server Class for the Raspberry Pi
#
# Based on server.py from pistreaming
# ref: https://github.com/waveform80/pistreaming
# Copyright 2014 Dave Hughes <dave@waveform.org.uk>
#
# 06 March 2017 - 1.0 Original Issue
#
# Reefwing Software
# Simplified BSD Licence - see bottom of file.

import sys, io, os, shutil, picamera, signal

from subprocess import Popen, PIPE, check_output
from string import Template
from struct import Struct
from threading import Thread
from time import sleep, time
from http.server import HTTPServer, BaseHTTPRequestHandler
from wsgiref.simple_server import make_server
from ws4py.websocket import WebSocket
from ws4py.server.wsgirefserver import WSGIServer, WebSocketWSGIRequestHandler
from ws4py.server.wsgiutils import WebSocketWSGIApplication

###########################################
# CONFIGURATION
WIDTH = 640
HEIGHT = 480
FRAMERATE = 24
HTTP_PORT = 8082
WS_PORT = 8084
COLOR = u'#444'
BGCOLOR = u'#333'
JSMPEG_MAGIC = b'jsmp'
JSMPEG_HEADER = Struct('>4sHH')
###########################################


class StreamingHttpHandler(BaseHTTPRequestHandler):
    def do_HEAD(self):
        self.do_GET()

    def do_GET(self):
        if self.path == '/':
            self.send_response(301)
            self.send_header('Location', '/index.html')
            self.end_headers()
            return
        elif self.path == '/jsmpg.js':
            content_type = 'application/javascript'
            content = self.server.jsmpg_content
        elif self.path == '/index.html':
            content_type = 'text/html; charset=utf-8'
            tpl = Template(self.server.index_template)
            content = tpl.safe_substitute(dict(
                ADDRESS='%s:%d' % (self.request.getsockname()[0], WS_PORT),
                WIDTH=WIDTH, HEIGHT=HEIGHT, COLOR=COLOR, BGCOLOR=BGCOLOR))
        else:
            self.send_error(404, 'File not found')
            return
        content = content.encode('utf-8')
        self.send_response(200)
        self.send_header('Content-Type', content_type)
        self.send_header('Content-Length', len(content))
        self.send_header('Last-Modified', self.date_time_string(time()))
        self.end_headers()
        if self.command == 'GET':
            self.wfile.write(content)


class StreamingHttpServer(HTTPServer):
    def __init__(self):
        super(StreamingHttpServer, self).__init__(
                ('', HTTP_PORT), StreamingHttpHandler)
        with io.open('index.html', 'r') as f:
            self.index_template = f.read()
        with io.open('jsmpg.js', 'r') as f:
            self.jsmpg_content = f.read()


class StreamingWebSocket(WebSocket):
    def opened(self):
        self.send(JSMPEG_HEADER.pack(JSMPEG_MAGIC, WIDTH, HEIGHT), binary=True)


class BroadcastOutput(object):
    def __init__(self, camera):
        print('Spawning background conversion process')
        self.converter = Popen([
            'avconv',
            '-f', 'rawvideo',
            '-pix_fmt', 'yuv420p',
            '-s', '%dx%d' % camera.resolution,
            '-r', str(float(camera.framerate)),
            '-i', '-',
            '-f', 'mpeg1video',
            '-b', '800k',
            '-r', str(float(camera.framerate)),
            '-'],
            stdin=PIPE, stdout=PIPE, stderr=io.open(os.devnull, 'wb'),
            shell=False, close_fds=True)

    def write(self, b):
        self.converter.stdin.write(b)

    def flush(self):
        print('Waiting for background conversion process to exit')
        self.converter.stdin.close()
        self.converter.wait()


class BroadcastThread(Thread):
    def __init__(self, converter, websocket_server):
        super(BroadcastThread, self).__init__()
        self.converter = converter
        self.websocket_server = websocket_server

    def run(self):
        try:
            while True:
                buf = self.converter.stdout.read(512)
                if buf:
                    self.websocket_server.manager.broadcast(buf, binary=True)
                elif self.converter.poll() is not None:
                    break
        finally:
            self.converter.stdout.close()

class Server():
    def __init__(self):
        # Create a new server instance
        print("Initializing camera")
        self.camera = picamera.PiCamera()
        self.camera.resolution = (WIDTH, HEIGHT)
        self.camera.framerate = FRAMERATE
        # hflip and vflip depends on how you mount the camera
        self.camera.vflip = True
        self.camera.hflip = False 
        sleep(1) # camera warm-up time
        print("Camera ready")

    def __str__(self):
        # Return string representation of server
        ip_addr = check_output(['hostname', '-I']).decode().strip()
        return "Server video stream at http://{}:{}".format(ip_addr, HTTP_PORT)

    def start(self):
        # Start video server streaming
        print('Initializing websockets server on port %d' % WS_PORT)
        self.websocket_server = make_server(
            '', WS_PORT,
            server_class=WSGIServer,
            handler_class=WebSocketWSGIRequestHandler,
            app=WebSocketWSGIApplication(handler_cls=StreamingWebSocket))
        self.websocket_server.initialize_websockets_manager()
        self.websocket_thread = Thread(target=self.websocket_server.serve_forever)
        print('Initializing HTTP server on port %d' % HTTP_PORT)
        self.http_server = StreamingHttpServer()
        self.http_thread = Thread(target=self.http_server.serve_forever)
        print('Initializing broadcast thread')
        output = BroadcastOutput(self.camera)
        self.broadcast_thread = BroadcastThread(output.converter, self.websocket_server)
        print('Starting recording')
        self.camera.start_recording(output, 'yuv')
        print('Starting websockets thread')
        self.websocket_thread.start()
        print('Starting HTTP server thread')
        self.http_thread.start()
        print('Starting broadcast thread')
        self.broadcast_thread.start()
        print("Video Stream available...")
        while True:
            self.camera.wait_recording(1)

    def cleanup(self):
        # Stop video server - close browser tab before calling cleanup
        print('Stopping recording')
        self.camera.stop_recording()
        print('Waiting for broadcast thread to finish')
        self.broadcast_thread.join()
        print('Shutting down HTTP server')
        self.http_server.shutdown()
        print('Shutting down websockets server')
        self.websocket_server.shutdown()
        print('Waiting for HTTP server thread to finish')
        self.http_thread.join()
        print('Waiting for websockets thread to finish')
        self.websocket_thread.join()

def main():
    server = Server()
    print(server)

    def endProcess(signum = None, frame = None):
        # Called on process termination. 
        if signum is not None:
            SIGNAL_NAMES_DICT = dict((getattr(signal, n), n) for n in dir(signal) if n.startswith('SIG') and '_' not in n )
            print("signal {} received by process with PID {}".format(SIGNAL_NAMES_DICT[signum], os.getpid()))
        print("\n-- Terminating program --")
        print("Cleaning up Server...")
        server.cleanup()
        print("Done.")
        exit(0)

    # Assign handler for process exit
    signal.signal(signal.SIGTERM, endProcess)
    signal.signal(signal.SIGINT, endProcess)
    signal.signal(signal.SIGHUP, endProcess)
    signal.signal(signal.SIGQUIT, endProcess)
    
    server.start()
    
            
if __name__ == '__main__':
    main()

## Copyright (c) 2017, Reefwing Software
## All rights reserved.
##
## Redistribution and use in source and binary forms, with or without
## modification, are permitted provided that the following conditions are met:
##
## 1. Redistributions of source code must retain the above copyright notice, this
##   list of conditions and the following disclaimer.
## 2. Redistributions in binary form must reproduce the above copyright notice,
##   this list of conditions and the following disclaimer in the documentation
##   and/or other materials provided with the distribution.
##
## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
## ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
## WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
## DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
## ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
## (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
## LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
## ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
## (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.