Friday, March 17, 2017

Cayenne Competition

Cayenne




We mentioned Cayenne in an earlier post when we were looking for a web video streaming solution for the Raspberry Pi. They provide a drag and drop dashboard for your IoT projects.



They have announced a home automation contest so we thought we would give it a try. The judging criteria for the contest are:
  • Interaction of Arduino hardware and Cayenne software with various areas of the home
  • Use of Cayenne’s Triggers & Alerts and Scheduling features
  • Number of devices and sensors connected
  • Real world practicality and usability

Obviously you have to use Cayenne, and you need to include at least one Arduino.

Connecting an Arduino to the Cayenne Server


This is pretty well documented for both the Arduino and Raspberry Pi, but there were a few missing steps in getting the connection script to run on our Mac. There are three steps:
  1. Connect your Arduino to your PC. Open up your Arduino IDE, download the Cayenne Library.
  2. Set up your free Cayenne account. Start a new project and add an Arduino. Copy the sketch for your device and paste it into the IDE. Upload the sketch and run it.
  3. This was the tricky bit for us. You need to run a connection script on your Mac which redirects the Arduino traffic to the Cayenne server. The scripts are located under the extras/scripts folder in the main Arduino library folder. The instructions for Linux and OS X say to run: ./cayenne-ser.sh (you may need to run it with sudo).

Getting the Connection Script to work on a Mac


First you need to find the script. We got to ours using:

cd Arduino/libraries/Cayenne/extras/scripts
As instructed, we then tried:

./cayenne-ser.sh
But received the error:

-bash: ./cayenne-ser.sh: Permission denied
No problem, we thought; we would just use sudo:

sudo ./cayenne-ser.sh
and received a new error:

sudo: ./cayenne-ser.sh: command not found
That's weird. So we tried:

sudo sh ./cayenne-ser.sh
And received another error, but we were getting closer...

This script uses socat utility, but could not find it.

  Try installing it using: brew install socat
So we gave that a shot but we didn't have Homebrew installed. Homebrew is a package manager for the Mac (similar to apt-get on Raspbian). To install Homebrew:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Once Homebrew is installed you can use brew to install socat. Socat is a command-line utility that establishes two bidirectional byte streams and transfers data between them; here it relays traffic between the Arduino's serial port and the Cayenne server.

brew install socat
Once you have done all that, you can run your connection script again. In our case the script ran but didn't pick the correct serial port. You can specify which port to use with the -c flag:

sudo sh cayenne-ser.sh -c /dev/tty.usbmodem1421
Use the port listed in the Arduino IDE under Tools -> Port.
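
Two footnotes from our debugging, for anyone following along: the original Permission denied error was simply because the script isn't marked executable, so making it executable would also have let us run it directly:

chmod +x cayenne-ser.sh

And on a Mac you can list the candidate serial ports from the terminal with ls /dev/tty.usb* (our board showed up as /dev/tty.usbmodem1421).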

Cayenne Hello World


To test your new dashboard connection to the Arduino, the easiest way is to add a switch widget pointed at digital output 13 (D13). On most Uno variants this pin is also connected to the onboard LED, so toggling the pin will toggle the LED. If you don't have an onboard LED you can always connect an external one. Don't forget to use a current-limiting resistor if you do.

The beauty of this is that you don't even have to add any code to the Arduino sketch; you can just use the connection sketch provided when you start a new project. For completeness we include the code below (this is for a USB connection). Don't forget to insert the token for your project.

#include <CayenneSerial.h>

// Cayenne authentication token. This should be obtained from the Cayenne Dashboard.
char token[] = "YOUR_TOKEN_HERE";

void setup()
{
  //Baud rate can be specified by calling Cayenne.begin(token, 9600);
  Cayenne.begin(token);
}

void loop()
{
  Cayenne.run();
}



The setup for your button should look like this:


So apart from a bit of messing about to get the connection script to run, it all works as advertised. We might have a crack at the home automation competition if we can think of something original to do...

HC-SR04 Ultrasonic Sensor Python Class for Raspberry Pi

The HC-SR04




The HC-SR04 ultrasonic ranging module provides 2 cm to 400 cm non-contact
measurement, with ranging accuracy up to 3 mm. The module includes an ultrasonic transmitter, a receiver and control circuitry. The module measures the time difference between transmission and reception of the ultrasonic signal; using the speed of sound and the ‘Speed = Distance/Time‘ equation, the distance between the source and target can then be easily calculated.

Credit to Vivek and his article on the same subject for the diagrams.




Wiring the HC-SR04 to a Raspberry Pi


The module has 4 pins:

  • VCC - 5V Supply
  • TRIG - Trigger Pulse Input
  • ECHO - Echo Pulse Output
  • GND - 0V Ground 

Wiring is straightforward with one exception: note that the sensor operates at 5V, not the 3.3V of the Raspberry Pi. Connecting the ECHO pulse pin directly to the Raspberry Pi would be a BAD idea and could damage the Pi. We need to use a voltage divider or a logic level converter module to drop the logic level from the HC-SR04 to a maximum of 3.3V. Current draw for the sensor is 15 mA.

As we have a spare logic level converter, we will use that. Connections for the logic converter are shown below.


For the voltage divider option: Vout = Vin x R2/(R1+R2) = 5 x 10000/(4700 + 10000) = 3.4V






Python Class for the HC-SR04 Ultrasonic Sensor



To utilise the HC-SR04:

  1. Provide a trigger signal to the TRIG input: it requires a HIGH pulse of at least 10 µs duration.
  2. This causes the module to transmit a burst of eight 40 kHz ultrasonic pulses.
  3. If there is an obstacle in front of the module, it will reflect those ultrasonic waves.
  4. If the signal comes back, the ECHO output of the module will be HIGH for the time taken to send and receive the ultrasonic signal. The pulse width ranges from 150 µs to 25 ms depending on the distance of the obstacle from the sensor, and will be about 38 ms if there is no obstacle.
  5. Obstacle distance = (high-level time × velocity of sound (343.21 m/s at sea level and 20°C)) / 2.
  6. Allow at least 60 ms between measurements.





The measured pulse duration covers the round trip of the ultrasonic signal, out to the obstacle and back, so the one-way time is Time/2.

Distance = Speed * Time/2

Speed of sound at sea level = 343.21 m/s or 34321 cm/s

Thus, Distance = 17160.5 * Time (in cm, with Time in seconds). For example, a 1 ms echo pulse corresponds to 17160.5 × 0.001 ≈ 17.2 cm.

As we are using the ultrasonic sensor with our Raspberry Pi robot, we have created a Python class that can be easily imported and used. Note the calibrate function, which can be used to help correct for things like altitude and temperature.

We have included a simple low-pass filter function which is equivalent to an exponentially weighted moving average. This is useful for smoothing the distance values returned from the sensor. Note that beta is the weight given to the newest sample, so the smaller the value of beta, the greater the smoothing.
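
Typical usage from another script looks like this (a minimal sketch; the pin numbers are the BCM numbers used in the test harness at the bottom of the listing):

from RS_UltraSonic import UltraSonic

sensor = UltraSonic(8, 7)                  # TRIG = BCM 8, ECHO = BCM 7
distance = sensor.ping()                   # single reading in cm (0 = nothing in range)
smoothed = UltraSonic.low_pass_filter(distance, 0, 0.5)   # beta = 0.5, previous value 0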

#!/usr/bin/python
# RS_UltraSonic.py - Ultrasonic Distance Sensor Class for the Raspberry Pi 
#
# 15 March 2017 - 1.0 Original Issue
#
# Reefwing Software
# Simplified BSD Licence - see bottom of file.

import RPi.GPIO as GPIO
import os, signal

from time import sleep, time

# Private Attributes
__CALIBRATE      = "1"
__TEST           = "2"
__FILTER         = "3"
__QUIT           = "q"

class UltraSonic():
    # Ultrasonic sensor class 
    
    def __init__(self, TRIG, ECHO, offset = 0.5):
        # Create a new sensor instance
        self.TRIG = TRIG
        self.ECHO = ECHO
        self.offset = offset                             # Sensor calibration factor
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(self.TRIG, GPIO.OUT)                  # Set pin as GPIO output
        GPIO.setup(self.ECHO, GPIO.IN)                   # Set pin as GPIO input

    def __str__(self):
        # Return string representation of sensor
        return "Ultrasonic Sensor: TRIG - {0}, ECHO - {1}, Offset: {2} cm".format(self.TRIG, self.ECHO, self.offset)

    def ping(self):
        # Get distance measurement
        GPIO.output(self.TRIG, GPIO.LOW)                 # Set TRIG LOW
        sleep(0.1)                                       # Min gap between measurements        
        # Create 10 us pulse on TRIG
        GPIO.output(self.TRIG, GPIO.HIGH)                # Set TRIG HIGH
        sleep(0.00001)                                   # Delay 10 us
        GPIO.output(self.TRIG, GPIO.LOW)                 # Set TRIG LOW
        # Measure return echo pulse duration
        while GPIO.input(self.ECHO) == GPIO.LOW:         # Wait until ECHO is LOW
            pulse_start = time()                         # Save pulse start time

        while GPIO.input(self.ECHO) == GPIO.HIGH:        # Wait until ECHO is HIGH
            pulse_end = time()                           # Save pulse end time

        pulse_duration = pulse_end - pulse_start 
        # Distance = 17160.5 * Time (unit cm) at sea level and 20C
        distance = pulse_duration * 17160.5              # Calculate distance
        distance = round(distance, 2)                    # Round to two decimal points

        if distance > 2 and distance < 400:              # Check distance is in sensor range
            distance = distance + self.offset
            print("Distance: ", distance," cm")
        else:
            distance = 0
            print("No obstacle")                         # Nothing detected by sensor
        return distance

    def calibrate(self):
        # Calibrate sensor distance measurement by adjusting the offset
        while True:
            self.ping()
            response = input("Enter Offset (q = quit): ")
            if response == "q":            # __QUIT is name-mangled inside the class, so use the literal
                break
            self.offset = float(response)
            print(self)
            
    @staticmethod
    def low_pass_filter(value, previous_value, beta):
        # Simple infinite-impulse-response (IIR) single-pole low-pass filter.
        # ß = discrete-time smoothing parameter (determines smoothness). 0 < ß < 1
        # LPF: Y(n) = (1-ß)*Y(n-1) + (ß*X(n))) = Y(n-1) - (ß*(Y(n-1)-X(n)))
        smooth_value = previous_value - (beta * (previous_value - value))
        return smooth_value
        

def main():
    sensor = UltraSonic(8, 7)       # create a new sensor instance (TRIG = BCM 8, ECHO = BCM 7)
    print(sensor)

    def endProcess(signum = None, frame = None):
        # Called on process termination. 
        if signum is not None:
            SIGNAL_NAMES_DICT = dict((getattr(signal, n), n) for n in dir(signal) if n.startswith('SIG') and '_' not in n )
            print("signal {} received by process with PID {}".format(SIGNAL_NAMES_DICT[signum], os.getpid()))
        print("\n-- Terminating program --")
        print("Cleaning up GPIO...")
        GPIO.cleanup()
        print("Done.")
        exit(0)

    # Assign handler for process exit
    signal.signal(signal.SIGTERM, endProcess)
    signal.signal(signal.SIGINT, endProcess)
    signal.signal(signal.SIGHUP, endProcess)
    signal.signal(signal.SIGQUIT, endProcess)

    while True:
        action = input("\nSelect Action - (1) Calibrate, (2) Test, or (3) Filter: ")

        if action == __CALIBRATE:
            sensor.calibrate()
        elif action == __FILTER:
            beta = input("Enter Beta 0 < ß < 1 (q = quit): ")
            filtered_value = 0
            if beta == __QUIT:
                break
            while True:
                filtered_value = sensor.low_pass_filter(sensor.ping(), filtered_value, float(beta))
                filtered_value = round(filtered_value, 2)
                print("Filtered: ", filtered_value, " cm")
        else:
            sensor.ping()

if __name__ == "__main__":
    # execute only if run as a script
    main()

## Copyright (c) 2017, Reefwing Software
## All rights reserved.
##
## Redistribution and use in source and binary forms, with or without
## modification, are permitted provided that the following conditions are met:
##
## 1. Redistributions of source code must retain the above copyright notice, this
##   list of conditions and the following disclaimer.
## 2. Redistributions in binary form must reproduce the above copyright notice,
##   this list of conditions and the following disclaimer in the documentation
##   and/or other materials provided with the distribution.
##
## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
## ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
## WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
## DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
## ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
## (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
## LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
## ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
## (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.



Wednesday, March 15, 2017

Controlling the Raspberry Pi via a web browser

Web Controlled Robot





Now that we can stream video to a web page it would be nice to be able to remotely control our robot. To do this we will use the Raspberry Pi to run a web server that serves the page used to control the robot. Once we have this up and running you will be able to drive your robot around using a browser on your laptop via WiFi on your LAN.

As shown in the previous post, you can use the python command print(server) to see what URL you need to point your browser at to see the video and control your robot. The way the controls work is as follows:
  1. Typing the address of your Pi served page (e.g. http://192.168.0.9:8082) into your browser will send a web request to the Python program running the server, in our case RS_Server.py.
  2. RS_Server responds with the contents of index.html. Your browser renders this HTML and it appears in your browser.
  3. The broadcasting of video data is handled by the broadcast thread object in RS_Server. The BroadcastThread class implements a background thread which continually reads encoded MPEG1 data from the background FFmpeg process started by the BroadcastOutput class and broadcasts it to all connected websockets. More detail on this can be found at pistreaming if you are interested. Basically the camera is continually taking photos, converting them to MPEG frames and sending them at the frame rate to a canvas in your browser.
  4. You will see below that we have modified the index.html file to display a number of buttons to control our robot. Pressing one of these buttons will send a GET request to the server running on your Pi with a parameter of "command" and the value of the button pressed. We then handle the request by passing the appropriate command on to our MotorControl class (see the sketch after this list). To do this we will need to bring together RS_Server and RS_MotorControl in our new RS_Robot class.
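
To make step 4 concrete, here is a minimal sketch (not the actual RS_Server code) of how the command parameter can be pulled out of the GET request inside the handler; handle_command is a hypothetical hook standing in for whatever your robot class provides:

from urllib.parse import urlparse, parse_qs

def do_GET(self):
    # e.g. a request for /?command=f carries the Forward command
    query = parse_qs(urlparse(self.path).query)
    command = query.get('command', [None])[0]
    if command:                                      # 'f', 'b', 'l', 'r', 's', '+', '-', 'm' or 'a'
        self.server.robot.handle_command(command)    # hypothetical robot hook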

Modifying index.html



The index.html file provided by pistreaming just creates a canvas in which to display our streaming video. To this we will add a table with 9 command control buttons for our robot. You could get away with only 5 (Forward, Back, Left, Right and Stop) but looking ahead we know we will also need 4 more (speed increase, speed decrease, auto and manual). Auto and Manual will toggle between autonomous control and remote control (i.e. via the browser). Associated with each button is a JavaScript handler that sends the appropriate command when the button is clicked.

In addition to controlling your robot via the on screen buttons you can use the keyboard. We have mapped the following functionality:

Up Arrow    = Forward
Down Arrow  = Back
Left Arrow  = Left
Right Arrow = Right
Space       = Stop
-           = Decrease Speed
+           = Increase Speed
m           = Manual
a           = Autonomous

You can modify the index.html to map whatever keybindings you want. Be aware that the keycode returned by different browsers isn't always consistent. You can use the JavaScript Event KeyCode Test Page to find out what key code your browser returns for different keys.

The manual and auto modes don't do anything at this stage. 

The modified index.html file is shown below.

<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=${WIDTH}, initial-scale=1"/>
    <title>Alexa M</title>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript" charset="utf-8"></script>

    <style>
        .controls {
            width: 150px;
            font-size: 22pt;
            text-align: center;
            padding: 15px;
            background-color: green;
            color: white;
        }
    </style>

    <style type="text/css">
            body {
                background: ${BGCOLOR};
                text-align: center;
                margin-top: 2%;
            }
            #videoCanvas {
                /* Always stretch the canvas to the specified size, regardless of its internal size. */
                width: ${WIDTH}px;
                height: ${HEIGHT}px;
            }
    </style>

    <script>
    function sendCommand(command)
    {
        $.get('/', {command: command});
    }
    
    function keyPress(event)
    {
        keyCode = event.keyCode;
        
        switch (keyCode) {
            case 38:                // up arrow
                sendCommand('f');
                break;
            case 37:                // left arrow
                sendCommand('l');
                break;
            case 32:                // space
                sendCommand('s');
                break;
            case 39:                // right arrow
                sendCommand('r');
                break;
            case 40:                // down arrow
                sendCommand('b');
                break;
            case 109:               // - = decrease speed
            case 189:
                sendCommand('-');
                break;
            case 107:
            case 187:
                sendCommand('+');   // + = increase speed
                break;
            case 77: 
                sendCommand('m');   // m = manual (remote control)
                break;
            case 65:
                sendCommand('a');   // a = autonomous
                break;
            default: return;        // allow other keys to be handled
        }
        
        // prevent default action (eg. page moving up/down with arrow keys)
        event.preventDefault();
    }
    $(document).keydown(keyPress);
    </script>
</head>

<body>

    <h1 style="color: white">Alexa M</h1>

    <!-- The Canvas size specified here is the "initial" internal resolution. jsmpeg will
        change this internal resolution to whatever the source provides. The size the
        canvas is displayed on the website is dictated by the CSS style.
    -->
    <canvas id="videoCanvas" width="${WIDTH}" height="${HEIGHT}">
        <p>
            Please use a browser that supports the Canvas Element, like
            <a href="http://www.google.com/chrome">Chrome</a>,
            <a href="http://www.mozilla.com/firefox/">Firefox</a>,
            <a href="http://www.apple.com/safari/">Safari</a> or Internet Explorer 10
        </p>
    </canvas>
    <script type="text/javascript" src="jsmpg.js"></script>
    <script type="text/javascript">
        // Show loading notice
        var canvas = document.getElementById('videoCanvas');
        var ctx = canvas.getContext('2d');
        ctx.fillStyle = '${COLOR}';
        ctx.fillText('Loading...', canvas.width/2-30, canvas.height/3);
        // Setup the WebSocket connection and start the player
        var client = new WebSocket('ws://${ADDRESS}/');
        var player = new jsmpeg(client, {canvas:canvas});
    </script>

    <table align="center">
    <tr><td  class="controls" onClick="sendCommand('-');">-</td>
        <td  class="controls" onClick="sendCommand('f');">Forward</td>
        <td  class="controls" onClick="sendCommand('+');">+</td>
    </tr>
    <tr><td  class="controls" onClick="sendCommand('l');">Left</td>
        <td  class="controls" onClick="sendCommand('s');">Stop</td>
        <td  class="controls" onClick="sendCommand('r');">Right</td>
    </tr>
    <tr><td  class="controls" onClick="sendCommand('m');">Manual</td>
        <td  class="controls" onClick="sendCommand('b');">Back</td>
        <td  class="controls" onClick="sendCommand('a');">Auto</td>
    </tr>
    </table>

</body>
</html>

Python Robot Class


As Alexa M continues to evolve, so too will this robot class. For now we can keep things pretty simple. In addition to creating a robot class we have updated the motor control, servo and server classes. Rather than reproduce all the code, we will provide links to our Gist Repository where you can download the latest versions. For completeness, we will also provide links to the HTML and JavaScript library that you will need. All these files need to be in the same directory.

  1. RS_Robot.py version 1.0 - Run this script on your Pi to create a telepresence rover.
  2. RS_Server.py version 1.1 - Updated to include command parsing.
  3. RS_MotorControl.py version 1.1 - New motor control methods.
  4. RS_Servo.py version 1.2 - License added.
  5. index.html version 1.0 - The file shown in the previous section.
  6. jsmpg.js - Dominic Szablewski's Javascript-based MPEG1 decoder.
That completes the remote control and video streaming portion of the design. We hope you have as much fun driving around your robot as we do. Next up we will look at battery monitoring and autonomous control of the robot.

Sunday, March 5, 2017

Streaming Video from the Raspberry Pi Camera

Building a Telepresence Robot


When building a robot you quickly work out that you have two choices with regards to controlling it: autonomous or some sort of remote control. We will develop both for Alexa M. We are going with remote control first because we are waiting for our ultrasonic mounting bracket to arrive from China.

As Alexa M has the Raspberry Pi camera fitted it makes sense to stream the video so we can have a view of what the robot is seeing. In effect a simple telepresence rover.

There are many different approaches for providing remote control to a robot (including wired, WiFi, Bluetooth, or RF). We wanted something wireless, with a Python API which could incorporate the video stream with minimal lag. That quickly narrowed things down and we chose control via WiFi.

Robot control via WiFi is pretty straightforward. You use a micro-framework like Bottle or Flask to set up the Pi as a web server and then you can use your browser to access the associated web page. Well, maybe it isn't that straightforward, but at least it is well documented. Streaming video to the same web page turned out to be a bit of a challenge - but not impossible. We were surprised that this wasn't a problem with an obvious solution given the numerous requests on the web for this functionality. The underlying issue seems to be that the Pi's camera outputs raw H.264, and what most browsers want is an MPEG transport stream. Given video was the tricky bit, we used this to decide which framework to use.
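
To give a flavour of the micro-framework approach, here is a minimal Bottle server (a sketch only; the route and page content are our own example, not part of any robot code):

from bottle import route, run

@route('/')
def index():
    # Whatever this returns is rendered by the browser
    return '<h1>Hello from the Pi</h1>'

# Listen on all interfaces so other machines on the LAN can connect
run(host='0.0.0.0', port=8080)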

Video Streaming - The Options


The following is a list of the options that we came across when searching for a solution. No doubt there are many more, and if there are any we missed then let us know in the comments.
  1. picamera - was our first stop. It is a pure Python interface to the Raspberry Pi camera module. Perfect! Except it doesn't do streaming. For anything else it is very good.
  2. RPi-Cam-Web-Interface - is a web interface for the Raspberry Pi Camera module that can be opened in any browser (smartphones included). Now we are cooking. Follow the link to install this on your Pi. It works very well, has zero lag and probably has the best video quality of the options we tried. However, server side coding, HTML, CSS and JavaScript are not areas of expertise for us, so we need a pretty idiot proof guide to modding this. We are sure you could add custom controls to the page served by RPi-Cam-Web-Interface but it wasn't obvious how to do this.
  3. bottle - is a fast, simple and lightweight WSGI micro web-framework for Python. It is distributed as a single file module and has no dependencies other than the Python Standard Library. The Raspberry Pi forums include an example of how to stream video using bottle, so this was definitely a contender. Electronut Labs also provide a simple tutorial on turning an LED on/off using bottle.
  4. flask - is another lightweight WSGI micro web-framework for Python. It is similar to bottle and you would probably choose flask over bottle if you had a more complicated application (over 1000 lines appears to be the consensus). Miguel has a tutorial on streaming video with flask and there is another guide provided by CCTV camera pros for the Raspberry Pi. Either flask or bottle would get the job done.
  5. Cayenne - helps you build a drag and drop web based dashboard for your IoT applications (i.e. Arduino and Raspberry Pi). It is pretty fancy but it can't do video streaming (yet).
  6. UV4L - was originally conceived as a modular collection of Video4Linux2-compliant, cross-platform drivers. It has evolved over the years and now includes a full-featured Streaming Server component. There is a module for single or dual Raspberry Pi CSI Camera boards but it is command line based and we would prefer a Python API. At this stage there are easier options.
  7. pistreaming - provides low latency streaming of the Pi's camera module to any reasonably modern web browser. This is written by the same guy that did the picamera module, all the source code is provided and most importantly it is documented well enough for us to be able to modify the served page to do what we require. The video isn't as good as RPi-Cam-Web-Interface but there is no lag on our LAN. This is the option we ended up using.

PiStreaming


To get the pistreaming solution to work you will need 3 files:
  1. index.html - the html code for the page that you are serving;
  2. server.py - the python code which serves up the video stream; and
  3. jsmpg.js - Dominic Szablewski's Javascript-based MPEG1 decoder.
These can all be cloned from the pistreaming repository. As a first step install the code by following the instructions at pistreaming. Once you have that up and working you can tweak it for your purposes.
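
If you have git installed, cloning the repository looks like this (this is the repository URL referenced in the header of our server class below):

git clone https://github.com/waveform80/pistreaming.git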

RS_Server - a Video Streaming Python Class


To make streaming compatible with our robot class we have turned server.py into a server class. We have made a few other tweaks like inverting the camera since ours is mounted upside down. The print(server) command will display the URL where you can view the stream. The Server class is designed to be imported into another class and usage should be obvious from the class documentation and instructions at pistreaming.
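
Usage from your own code is then just a few lines (a minimal sketch; cleanup on exit is handled as in the main() test harness at the bottom of the listing):

from RS_Server import Server

server = Server()     # initializes the camera
print(server)         # prints the URL to point your browser at
server.start()        # starts streaming; blocks until terminated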



We have also changed the index.html file in preparation for controlling the robot via the website, but we will cover this in a subsequent post.

#!/usr/bin/env python
# RS_Server.py - Web Server Class for the Raspberry Pi
#
# Based on server.py from pistreaming
# ref: https://github.com/waveform80/pistreaming
# Copyright 2014 Dave Hughes <dave@waveform.org.uk>
#
# 06 March 2017 - 1.0 Original Issue
#
# Reefwing Software
# Simplified BSD Licence - see bottom of file.

import sys, io, os, shutil, picamera, signal

from subprocess import Popen, PIPE, check_output
from string import Template
from struct import Struct
from threading import Thread
from time import sleep, time
from http.server import HTTPServer, BaseHTTPRequestHandler
from wsgiref.simple_server import make_server
from ws4py.websocket import WebSocket
from ws4py.server.wsgirefserver import WSGIServer, WebSocketWSGIRequestHandler
from ws4py.server.wsgiutils import WebSocketWSGIApplication

###########################################
# CONFIGURATION
WIDTH = 640
HEIGHT = 480
FRAMERATE = 24
HTTP_PORT = 8082
WS_PORT = 8084
COLOR = u'#444'
BGCOLOR = u'#333'
JSMPEG_MAGIC = b'jsmp'
JSMPEG_HEADER = Struct('>4sHH')
###########################################


class StreamingHttpHandler(BaseHTTPRequestHandler):
    def do_HEAD(self):
        self.do_GET()

    def do_GET(self):
        if self.path == '/':
            self.send_response(301)
            self.send_header('Location', '/index.html')
            self.end_headers()
            return
        elif self.path == '/jsmpg.js':
            content_type = 'application/javascript'
            content = self.server.jsmpg_content
        elif self.path == '/index.html':
            content_type = 'text/html; charset=utf-8'
            tpl = Template(self.server.index_template)
            content = tpl.safe_substitute(dict(
                ADDRESS='%s:%d' % (self.request.getsockname()[0], WS_PORT),
                WIDTH=WIDTH, HEIGHT=HEIGHT, COLOR=COLOR, BGCOLOR=BGCOLOR))
        else:
            self.send_error(404, 'File not found')
            return
        content = content.encode('utf-8')
        self.send_response(200)
        self.send_header('Content-Type', content_type)
        self.send_header('Content-Length', len(content))
        self.send_header('Last-Modified', self.date_time_string(time()))
        self.end_headers()
        if self.command == 'GET':
            self.wfile.write(content)


class StreamingHttpServer(HTTPServer):
    def __init__(self):
        super(StreamingHttpServer, self).__init__(
                ('', HTTP_PORT), StreamingHttpHandler)
        with io.open('index.html', 'r') as f:
            self.index_template = f.read()
        with io.open('jsmpg.js', 'r') as f:
            self.jsmpg_content = f.read()


class StreamingWebSocket(WebSocket):
    def opened(self):
        self.send(JSMPEG_HEADER.pack(JSMPEG_MAGIC, WIDTH, HEIGHT), binary=True)


class BroadcastOutput(object):
    def __init__(self, camera):
        print('Spawning background conversion process')
        self.converter = Popen([
            'avconv',
            '-f', 'rawvideo',
            '-pix_fmt', 'yuv420p',
            '-s', '%dx%d' % camera.resolution,
            '-r', str(float(camera.framerate)),
            '-i', '-',
            '-f', 'mpeg1video',
            '-b', '800k',
            '-r', str(float(camera.framerate)),
            '-'],
            stdin=PIPE, stdout=PIPE, stderr=io.open(os.devnull, 'wb'),
            shell=False, close_fds=True)

    def write(self, b):
        self.converter.stdin.write(b)

    def flush(self):
        print('Waiting for background conversion process to exit')
        self.converter.stdin.close()
        self.converter.wait()


class BroadcastThread(Thread):
    def __init__(self, converter, websocket_server):
        super(BroadcastThread, self).__init__()
        self.converter = converter
        self.websocket_server = websocket_server

    def run(self):
        try:
            while True:
                buf = self.converter.stdout.read(512)
                if buf:
                    self.websocket_server.manager.broadcast(buf, binary=True)
                elif self.converter.poll() is not None:
                    break
        finally:
            self.converter.stdout.close()

class Server():
    def __init__(self):
        # Create a new server instance
        print("Initializing camera")
        self.camera = picamera.PiCamera()
        self.camera.resolution = (WIDTH, HEIGHT)
        self.camera.framerate = FRAMERATE
        # hflip and vflip depends on how you mount the camera
        self.camera.vflip = True
        self.camera.hflip = False 
        sleep(1) # camera warm-up time
        print("Camera ready")

    def __str__(self):
        # Return string representation of server
        ip_addr = check_output(['hostname', '-I']).decode().strip()
        return "Server video stream at http://{}:{}".format(ip_addr, HTTP_PORT)

    def start(self):
        # Start video server streaming
        print('Initializing websockets server on port %d' % WS_PORT)
        self.websocket_server = make_server(
            '', WS_PORT,
            server_class=WSGIServer,
            handler_class=WebSocketWSGIRequestHandler,
            app=WebSocketWSGIApplication(handler_cls=StreamingWebSocket))
        self.websocket_server.initialize_websockets_manager()
        self.websocket_thread = Thread(target=self.websocket_server.serve_forever)
        print('Initializing HTTP server on port %d' % HTTP_PORT)
        self.http_server = StreamingHttpServer()
        self.http_thread = Thread(target=self.http_server.serve_forever)
        print('Initializing broadcast thread')
        output = BroadcastOutput(self.camera)
        self.broadcast_thread = BroadcastThread(output.converter, self.websocket_server)
        print('Starting recording')
        self.camera.start_recording(output, 'yuv')
        print('Starting websockets thread')
        self.websocket_thread.start()
        print('Starting HTTP server thread')
        self.http_thread.start()
        print('Starting broadcast thread')
        self.broadcast_thread.start()
        print("Video Stream available...")
        while True:
            self.camera.wait_recording(1)

    def cleanup(self):
        # Stop video server - close browser tab before calling cleanup
        print('Stopping recording')
        self.camera.stop_recording()
        print('Waiting for broadcast thread to finish')
        self.broadcast_thread.join()
        print('Shutting down HTTP server')
        self.http_server.shutdown()
        print('Shutting down websockets server')
        self.websocket_server.shutdown()
        print('Waiting for HTTP server thread to finish')
        self.http_thread.join()
        print('Waiting for websockets thread to finish')
        self.websocket_thread.join()

def main():
    server = Server()
    print(server)

    def endProcess(signum = None, frame = None):
        # Called on process termination. 
        if signum is not None:
            SIGNAL_NAMES_DICT = dict((getattr(signal, n), n) for n in dir(signal) if n.startswith('SIG') and '_' not in n )
            print("signal {} received by process with PID {}".format(SIGNAL_NAMES_DICT[signum], os.getpid()))
        print("\n-- Terminating program --")
        print("Cleaning up Server...")
        server.cleanup()
        print("Done.")
        exit(0)

    # Assign handler for process exit
    signal.signal(signal.SIGTERM, endProcess)
    signal.signal(signal.SIGINT, endProcess)
    signal.signal(signal.SIGHUP, endProcess)
    signal.signal(signal.SIGQUIT, endProcess)
    
    server.start()
    
            
if __name__ == '__main__':
    main()

## Copyright (c) 2017, Reefwing Software
## All rights reserved.
##
## Redistribution and use in source and binary forms, with or without
## modification, are permitted provided that the following conditions are met:
##
## 1. Redistributions of source code must retain the above copyright notice, this
##   list of conditions and the following disclaimer.
## 2. Redistributions in binary form must reproduce the above copyright notice,
##   this list of conditions and the following disclaimer in the documentation
##   and/or other materials provided with the distribution.
##
## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
## ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
## WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
## DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
## ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
## (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
## LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
## ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
## (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Saturday, March 4, 2017

Raspberry Pi Motor Board Python Class


Overview


We have made some additional changes to the Seeed Raspberry Pi Motor Board class. An obvious missing method is a way to change the speed of the motors. You can of course just change the duty attribute but this will only take effect the next time you change direction. So we have added a speed(duty) method. This will assign the new duty cycle and change the duty cycle of any motors which are already moving.

Note that in the Motor() class provided in the previous post:

def Stop():

should be:

def Stop(self):

Motor Control Class


Here is the updated Motor Control Class for the Seeed Motor Board. You may need to change the names of the direction methods as this will be determined by how you have wired your motors to the motor control board.

The MotorState enum class is used to record a history list of commands received. This may be useful when debugging the robot in autonomous mode.
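
Typical usage looks like this (a minimal sketch using only the methods defined in the class below):

from RS_MotorControl import MotorControl

motors = MotorControl()      # defaults: base_time = 0.01 s, duty = 50%
motors.left_forward()
motors.right_forward()       # both motors forward = drive straight ahead
motors.speed(75)             # speed up without stopping
motors.stop()
print(motors.history)        # e.g. [MotorState.INIT, MotorState.LEFT_FWD, ...]
motors.cleanup()             # stop software PWM on all outputs before exit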

#!/usr/bin/python
# RS_MotorControl.py - Motor Control Class for the Seeed Raspberry Pi Motor Driver 
# Board v1.0 which uses the Freescale MC33932 dual H-Bridge Power IC.
#
# Based on Seeed Motor() Class 
# ref: http://wiki.seeed.cc/Raspberry_Pi_Motor_Driver_Board_v1.0/
#
# 1 March 2017 - 1.0 Original Issue
#
# Reefwing Software
# Simplified BSD Licence - see bottom of file.

import RPi.GPIO as GPIO
import os, signal

from time import sleep
from enum import Enum, unique
from PiSoftPwm import *

@unique
class MotorState(Enum):
    INIT         = 1
    STOPPED      = 2
    LEFT_FWD     = 3
    RIGHT_FWD    = 4
    BOTH_FWD     = 5
    LEFT_BACK    = 6
    RIGHT_BACK   = 7
    BOTH_BACK    = 8
    CHANGE_SPEED = 9

class MotorControl():
    def __init__(self, base_time=0.01, duty=50):
        # MC33932 pins connected to GPIO
        self.PWMA = 25  
        self.PWMB = 22
        self._IN1 = 23  
        self._IN2 = 24 
        self._IN3 = 17
        self._IN4 = 27

        self.base_time = base_time
        self.duty = duty
        self.history = [MotorState.INIT]

        # Initialize PWMA & PWMB 
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(self.PWMA, GPIO.OUT)
        GPIO.setup(self.PWMB, GPIO.OUT)
        GPIO.output(self.PWMA, True)
        GPIO.output(self.PWMB, True)

        # Initialize Software PWM outputs
        # Left Motor  = OUT_1 and OUT_2
        # Right Motor = OUT_3 and OUT_4
        self.OUT_1  = PiSoftPwm(self.base_time, 100, self._IN1, GPIO.BCM)
        self.OUT_2  = PiSoftPwm(self.base_time, 100, self._IN2, GPIO.BCM)
        self.OUT_3  = PiSoftPwm(self.base_time, 100, self._IN3, GPIO.BCM)
        self.OUT_4  = PiSoftPwm(self.base_time, 100, self._IN4, GPIO.BCM)

        # Start PWM for outputs - nbSlicesOn = 0, i.e. duty cycle = 0
        self.OUT_1.start(0)
        self.OUT_2.start(0)
        self.OUT_3.start(0)
        self.OUT_4.start(0)

    def __str__(self):
        # Return string representation of motor control
        return "Motor Control: base time - {0} seconds, duty - {1}%".format(self.base_time, self.duty)

    def left_back(self):
        self.OUT_1.changeBaseTime(self.base_time)
        self.OUT_2.changeBaseTime(self.base_time)
        self.OUT_1.changeNbSlicesOn(self.duty)
        self.OUT_2.changeNbSlicesOn(0)
        self.history.append(MotorState.LEFT_BACK)

    def left_forward(self):
        self.OUT_1.changeBaseTime(self.base_time)
        self.OUT_2.changeBaseTime(self.base_time)
        self.OUT_1.changeNbSlicesOn(0)
        self.OUT_2.changeNbSlicesOn(self.duty)
        self.history.append(MotorState.LEFT_FWD)

    def right_back(self):
        self.OUT_3.changeBaseTime(self.base_time)
        self.OUT_4.changeBaseTime(self.base_time)
        self.OUT_3.changeNbSlicesOn(0)
        self.OUT_4.changeNbSlicesOn(self.duty)
        self.history.append(MotorState.RIGHT_BACK)

    def right_forward(self):
        self.OUT_3.changeBaseTime(self.base_time)
        self.OUT_4.changeBaseTime(self.base_time)
        self.OUT_3.changeNbSlicesOn(self.duty)
        self.OUT_4.changeNbSlicesOn(0)
        self.history.append(MotorState.RIGHT_FWD)

    def speed(self, duty):
        # Change motor speed to duty (0-100) if not stopped (0)
        self.duty = duty
        self.OUT_1.nbSlicesOn = duty if self.OUT_1.nbSlicesOn else 0
        self.OUT_2.nbSlicesOn = duty if self.OUT_2.nbSlicesOn else 0
        self.OUT_3.nbSlicesOn = duty if self.OUT_3.nbSlicesOn else 0
        self.OUT_4.nbSlicesOn = duty if self.OUT_4.nbSlicesOn else 0
        self.history.append(MotorState.CHANGE_SPEED)

    def stop(self):
        self.OUT_1.changeNbSlicesOn(0)
        self.OUT_2.changeNbSlicesOn(0)
        self.OUT_3.changeNbSlicesOn(0)
        self.OUT_4.changeNbSlicesOn(0)
        self.history.append(MotorState.STOPPED)
        
    def cleanup(self):
        # Stop PWM on all outputs
        self.OUT_1.stop()
        self.OUT_2.stop()
        self.OUT_3.stop()
        self.OUT_4.stop()

def main():
    motor_control = MotorControl()    # create a new motor control instance
    print(motor_control)

    def endProcess(signum = None, frame = None):
        # Called on process termination. Stop motor control PWM
        if signum is not None:
            SIGNALS_NAMES_DICT = dict((getattr(signal, n), n) for n in dir(signal) if n.startswith('SIG') and '_' not in n )
            print("signal {} received by process with PID {}".format(SIGNALS_NAMES_DICT[signum], os.getpid()))
        print("\n-- Terminating program --")
        print("Cleaning up motor control PWM and GPIO...")
        motor_control.cleanup()
        GPIO.cleanup()
        print("Done.")
        exit(0)

    # Assign handler for process exit
    signal.signal(signal.SIGTERM, endProcess)
    signal.signal(signal.SIGINT, endProcess)
    signal.signal(signal.SIGHUP, endProcess)
    signal.signal(signal.SIGQUIT, endProcess)

    while True:
        print('Testing motors...')
        motor_control.left_forward()
        sleep(1)
        motor_control.left_back()
        sleep(1)
        motor_control.right_forward()
        sleep(1)
        motor_control.right_back()
        sleep(1)
        # speed = int(input("Enter Speed (0-100, CTRL c to quit): "))
        # motor_control.speed(speed)
        
if __name__ == "__main__":
    # execute only if run as a script
    main()

## Copyright (c) 2017, Reefwing Software
## All rights reserved.
##
## Redistribution and use in source and binary forms, with or without
## modification, are permitted provided that the following conditions are met:
##
## 1. Redistributions of source code must retain the above copyright notice, this
##   list of conditions and the following disclaimer.
## 2. Redistributions in binary form must reproduce the above copyright notice,
##   this list of conditions and the following disclaimer in the documentation
##   and/or other materials provided with the distribution.
##
## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
## ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
## WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
## DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
## ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
## (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
## LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
## ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
## (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.