Friday, July 21, 2017

pcDuino and the Adafruit 5" LCD HDMI Display




After my inability to get the 7" LVDS display to work with the pcDuino I thought I would have a go with the 5" Adafruit display that I use with my Raspberry Pi. While not plug and play, it is significantly easier than the LVDS.

The Adafruit 5" 800x480 HDMI Backpack




This backpack features the TFP401 for decoding video and, for the touch version, an AR1100 USB resistive touch screen driver. Connecting the pcDuino to the screen is easy: just plug an HDMI cable into both.

You can power the display from a USB port on your pcDuino/Raspberry Pi, but it is probably better to use a separate supply. This is particularly true for the pcDuino, since it has only one USB host port, which you will probably want for the keyboard and mouse.

With the default 5" 800x480 display and 50mA backlight current, the current draw is 500mA total. You can reduce that to 370mA by running the backlight at half-brightness (25mA).

The gotcha with this display is that the TFP401 decoder chip does not contain a video scaler: it will not resize or shrink video. This means you must feed it a resolution of 800x480 @ 60Hz or you won't see anything!

Forcing the resolution of LightDM


LightDM is the display manager running in Ubuntu. Fortunately it is fairly straightforward to set the resolution it uses. To find the name of your display, type xrandr into a terminal. This indicated that my display was called LCD. Note that you can't do this via SSH; you need to be on the pcDuino with the screen attached.

Next you need to find out the parameters for your screen so that you can add a new display mode. Type the following into a terminal window.

$cvt 800 480
Copy what comes after "Modeline". It should look something like: "800x480_60.00"   29.50  800 824 896 992  480 483 493 500 -hsync +vsync. To create and add the new display mode, type:

$xrandr --newmode "800x480_60.00"   29.50  800 824 896 992  480 483 493 500 -hsync +vsync
$xrandr --addmode LCD 800x480_60.00
You can then make sure that the new mode works:

$xrandr --output LCD --mode 800x480_60.00

With a bit of luck LightDM will now fill your LCD screen.

To ensure that this is done every time the pcDuino boots, you can add it to the profile bash script we attempted to use to load the touch driver in the last post.

ubuntu@ubuntu:~$ sudo nano /etc/profile
Add the following lines to the end of the file, save, exit and reboot.

# Set resolution for the Adafruit 5" Screen
xrandr --newmode "800x480_60.00"   29.50  800 824 896 992  480 483 493 500 -hsync +vsync
xrandr --addmode LCD 800x480_60.00
xrandr --output LCD --mode 800x480_60.00



Saturday, July 15, 2017

pcDuino and the 7" LVDS LCD Touch Screen

Diyode Magazine




A new electronics magazine called Diyode (a mashup of DIY and diode?) recently launched in Australia. For a number of years the only local mag has been Silicon Chip, so I welcome another entrant. Based on issue 1, it looks like Diyode is pitched at people who don't have quite as much experience as Silicon Chip readers, with more of a bias towards microprocessor-based projects. With a sample size of one it is dangerous to draw too many conclusions; you can draw any line through a single point. Anyway, the more the merrier and vive la difference, I say!

It will be interesting to see if the Australian market can sustain two electronics magazines. It has in the past; the peak was, I think, in the 1980s, when there were Electronics Today International, Australian Electronics Monthly, Talking Electronics, Electronics Australia, and Silicon Chip. Eventually, though, Silicon Chip was the last mag standing. It is a tough time to be a traditional magazine publisher.

The reason for the history lesson is that as part of the launch, Diyode ran a competition for people to promote the mag on social media and I was one of the lucky winners. The prize was provided by Jaycar and was a pcDuino and its associated 7" LVDS LCD Touch Screen.  Thank you Diyode Magazine and Jaycar.

I have long been tempted to give the pcDuino a try, but the cost has been a barrier given you can get similar functionality with a Raspberry Pi and an Arduino. I have a project in mind for the pcDuino which I will cover in due course, but first I need to get the LCD screen working. This is a lot harder than it should be; using an HDMI screen is certainly the easier path. If Jaycar are selling the pcDuino and LCD as a package, they should flash the pcDuino with the correct drivers so that it works out of the box. I thought I would document my attempts to get the LCD screen working to save people some trouble if they purchase the same kit.

pcDuino vs Raspberry Pi 3




How does the pcDuino stack up against the Raspberry Pi 3 and Arduino? You can see the specifications in the table below. Generally the Pi 3 is better spec'ed than the pcDuino, except for the analogue and PWM pins.

As usual, which is better will depend on what you are trying to do. The big selling point of the pcDuino is the built-in "Arduino"; however, note that there is no ATmega microcontroller on board, so the Arduino functionality is emulated, and I suspect this will cause issues with some (if not most) of the available libraries. I haven't tested this yet, but I don't expect to be able to just drop in my existing Arduino code and have it work.


pcDuino v3B | Raspberry Pi 3
CPU: 1 GHz ARM Cortex-A7 dual core | 1.2 GHz quad-core 64-bit ARM Cortex-A53
SoC: Allwinner A20 | Broadcom BCM2837
GPU: Mali-400 dual core | Broadcom VideoCore IV @ 400 MHz
RAM: 1 GB | 1 GB LPDDR2-900 SDRAM
Storage: 4 GB on-board flash | microSD
Storage Expansion: microSD slot, SATA port | USB
Ethernet: 10/100/1000 | 10/100 Mbps
Wi-Fi / Bluetooth: 802.11b/g/n | 802.11n wireless LAN, Bluetooth 4.0
GPIO pins: 14 | 40
ADC pins: 6 | 0
PWM pins: 2 | 1
Communication: SPI, I2C, UART | SPI, I2C, UART
USB: 1 x host + 1 x OTG | 4 x USB
Video Output: HDMI, LVDS | HDMI, Display Serial Interface (DSI), composite
Analog Audio: 3.5mm stereo audio socket | 3.5mm stereo audio socket
Digital Audio: Yes, via I2S | I2S
Default OS: Ubuntu Linux | Raspbian
Power Supply: 5VDC 2000mA (via micro USB) | 5VDC 2500mA (via micro USB)
Dimensions: 121(L) x 65(W) x 15(H) mm | 85.6 x 56.5 x 17 mm


The LVDS LCD 1024 x 600 Touchscreen



This is a custom-made 7" LVDS colour LCD with capacitive touch for the pcDuino3 (XC-4350). It has a resolution of 1024 x 600 and comes with the LVDS screen, a driver board, a ribbon cable and 10 male-to-female jumper wires.

Low-voltage differential signalling, or LVDS, also known as TIA/EIA-644, is a technical standard that specifies electrical characteristics of a differential, serial communications protocol. LVDS operates at low power and can run at very high speeds using inexpensive twisted-pair copper cables.

• Resolution: 1024 x 600
• 5V powered via pcDuino board
• Overall dimensions: 167(L) x 107(W) x 10(D)mm

Connecting the pcDuino and the LVDS LCD Screen




Unfortunately this isn't as simple as you might hope. The first trick is connecting the ribbon cable to the LVDS connector on the pcDuino. Note that you have to pull out the dark grey part of the connector before you can insert the cable; you then push it back in to hold the cable in place. This isn't mentioned in the instructions and took me a while to figure out.

You can then connect the 10 jumper wires as instructed. These carry the touch signals and supply power.

• Pin 1 of the LCD driver breakout board -> 5V of pcDuino
• Pin 2 of the LCD driver breakout board -> GND of pcDuino
• Pin 3 of the LCD driver breakout board -> D2 of pcDuino
• Pin 4 of the LCD driver breakout board -> D3 of pcDuino
• Pin 5 of the LCD driver breakout board -> D4 of pcDuino
• Pin 6 of the LCD driver breakout board -> GND of pcDuino
• Pin 7 of the LCD driver breakout board -> D9 of pcDuino
• Pin 8 of the LCD driver breakout board -> SCL of pcDuino
• Pin 9 of the LCD driver breakout board -> SDA of pcDuino
• Pin 10 of the LCD driver breakout board -> D8 of pcDuino

Loading the LVDS Driver - Attempt 1


The instructions are wrong. They say to copy ft5x_ts.ko from the pcDuino GitHub repository onto the pcDuino and then run:

$insmod ft5x_tx.k
Obviously this doesn't work because the file doesn't exist. Unfortunately, even running:

$insmod ft5x_ts.ko
didn't work for me.

Loading the LVDS Driver - Attempt 2


Whipping out Dr Google will point you to a number of videos by Jingfeng Liu, who I assume works for LinkSprite, the manufacturer of the pcDuino. They include:
  1. Install touch driver for LVDS LCD on pcDuino3;
  2. 1024x600 LVDS LCD on pcDuino3;
  3. pcDuino3B with 1204x600 LVDS; and
  4. Flash pcDuino3 with LVDS image.
Three of these four solutions involve re-flashing the kernel and then reloading Ubuntu, which sounded like a lot of work, so I tried option 1 first (installing the touch driver). To do this you will need to attach your pcDuino to an HDMI screen, a keyboard and a mouse. You can follow along with the video, but in summary the instructions are as follows:

a) Boot up your pcDuino connected to an HDMI display and open LXTerminal. Install nano (a text editor) by typing:

$sudo apt-get install nano
b) Then install git so that you can clone the files you need from the pcDuino git repository.

$sudo apt-get install git-core
c) Copy the files you need using:

$git clone git://github.com/pcduino/modules
d) This will download more files than you need but some of the other ones look interesting and I may use them in a later project. If you type "ls" you will see a new directory called modules. Have a look at the contents if you want, but for now we need to make the location of the touch driver file our current working directory. To do this type:

$cd modules/touch/ft5x
e) Type "ls" and you should see the file ft5x_ts.ko, yes the same file we downloaded in attempt 1! As for attempt 1, you are instructed to load the driver using:

$sudo insmod ft5x_ts.ko
Unfortunately this still doesn't work and generates the error:

insmod: error inserting 'ft5x_ts.ko': -1 File exists
In the video you are instructed to check that the driver has loaded using:

$sudo dmesg | tail
Doing this confirms that the driver hasn't been loaded *sigh*. Based on the comments on this video and the LinkSprite forum, I am not the only one to have this issue.

The remainder of the video deals with placing the insmod command in the /etc/profile bash file so that the touch driver gets loaded every time the pcDuino boots. I did all this as well and rebooted (just in case a miracle occurred) but it still didn't work.

Loading the LVDS Driver - Attempt 3




Back to the drawing board. I guess we are doing it the hard way. I used the kernel and Ubuntu distribution from the pcDuino 3b download area (since my pcDuino is a version 3b - there is a separate area for the pcDuino 3).  Most of the instructions assume that you are using a Windows machine to create the microSD card image. If you have a Mac, the process is as follows:
  1. Format the SD Card using the SDFormatter app.
  2. In Terminal, get the name of the SD Device using "diskutil list". Mine was /dev/disk2.
  3. Unmount (not eject) the SD Card using the Disk Utility app. Select the SD device and then click on Unmount - see image above.
  4. Copy the kernel to the SD Card by typing in Terminal:
$sudo dd if=~/Desktop/pcduino3b_lvds_a20_kernel_livesuit_20150314.img of=/dev/disk2
I then followed the instructions from the "pcDuino3B with 1204x600 LVDS" video but couldn't get the pcDuino to flash the new kernel. The LED never did the expected slow flash to indicate the new kernel being loaded. I tried redoing the SD Card but this didn't fix it.

The problem may be that you need to use LiveSuite or PhoenixCard to burn the SD Card, so I gave that a crack but it is only available for Windows.

I have decided that it is all too hard and will just use my HDMI screen instead!



Monday, April 24, 2017

Node Red Dashboard for Raspberry Pi

What is Node Red?




Node-RED is a programming tool for wiring together hardware devices, APIs and online services. It was developed as a visual programming tool for the Internet of Things. It also allows you to produce and publish a funky web based dashboard with one click.


Node-RED includes a browser-based editor that makes it easy to wire together flows using the selection of nodes in the side menu. Flows can then be deployed to run in a single click. JavaScript functions can be created within the editor to customise the messages passed between nodes. A built-in library allows you to save useful functions, templates or flows for re-use.

The light-weight runtime is built on Node.js, taking full advantage of its event-driven, non-blocking model. This makes it ideal to run on low-cost hardware such as the Raspberry Pi as well as in the cloud.

Nodes can be anything from a timer that triggers events to a Raspberry Pi GPIO output used to turn on an LED (or a salt lamp, in our example). With over 225,000 modules in Node's package repository, it is easy to extend the range of nodes to add new capabilities. As we will demonstrate, there are packages available for the Raspberry Pi and the Sense HAT. The flows created in Node-RED are stored using JSON.

Node-RED was developed by IBM, which contributed it to the JS Foundation as an open source project in 2016.

The Himalayan Salt Lamp Project




My wife likes salt lamps. Salt lamps allegedly remove dust, pollen, cigarette smoke, and other contaminants from the air. How effective this is I don't know, and it is really irrelevant; as I said, my wife likes them! Salt is very hygroscopic, that is, it absorbs water. This is the basis of the claimed health benefits: the salt also absorbs any foreign particles the water may be carrying. The water then evaporates when the lamp is switched on, leaving the contaminants trapped in the salt.

Salt is so hygroscopic that it readily dissolves in the water it absorbs: this property is called deliquescence. It is of course a problem if your expensive Himalayan salt lamp dissolves into a puddle of salty water, especially if it is connected to 240VAC. In our house this melting process starts at relative humidities above 70%.

The solution is to turn the lamp on when the humidity rises above 70%. This seemed like a good excuse to start building our home automation hub and learn about Node-RED. Turning a lamp on and off based on humidity and time (lamp goes on at 5pm and off at 10pm) is trivial in Python, so we won't cover it in detail (a minimal sketch is shown below). What we will look at is manually controlling the lamp via our Node-RED dashboard, along with the other associated data we display.
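
For reference, here is roughly what that humidity-and-time logic looks like in Python. This is an illustrative sketch only: it assumes the humidity reading comes from a Sense HAT and that the lamp is switched via a GPIO pin driving a PowerSwitch Tail; the pin number, threshold and schedule are placeholders.

# Illustrative sketch only - assumes a Sense HAT for humidity and a GPIO pin
# driving a PowerSwitch Tail. Pin number, threshold and times are placeholders.
from datetime import datetime
from time import sleep

import RPi.GPIO as GPIO
from sense_hat import SenseHat

LAMP_PIN = 18                 # Hypothetical BCM pin wired to the PowerSwitch Tail
HUMIDITY_THRESHOLD = 70.0     # Percent relative humidity at which the salt starts to weep

GPIO.setmode(GPIO.BCM)
GPIO.setup(LAMP_PIN, GPIO.OUT)
sense = SenseHat()

try:
    while True:
        now = datetime.now()
        scheduled_on = 17 <= now.hour < 22                      # Lamp on between 5pm and 10pm
        too_humid = sense.get_humidity() > HUMIDITY_THRESHOLD   # Lamp on if the room is too humid
        GPIO.output(LAMP_PIN, GPIO.HIGH if (scheduled_on or too_humid) else GPIO.LOW)
        sleep(60)                                               # Re-check once a minute
finally:
    GPIO.cleanup()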

Node-RED and the Raspberry Pi




If you are running Raspbian Jessie on your Pi then you should already have Node-RED installed. Before starting the Node-RED server it is worth installing a few packages that you will need. Type the following at the CLI:

sudo apt-get update
sudo apt-get install npm
cd $HOME/.node-red
npm install node-red-dashboard
npm install node-red-node-snmp
npm install node-red-contrib-os

Node-RED is started by running the following command in the terminal:

node-red-start
Once started, you use a browser (either on the Pi or remotely) to build your applications and configure your dashboard. I used my MacBook Air; to do this, point your browser at the IP address of your Pi followed by :1880. If you do it on the Pi itself, the URL would be 127.0.0.1:1880 or localhost:1880. The associated dashboard URL is <IP Address>:1880/ui, so for example my Raspberry Pi dashboard is at http://192.168.0.18:1880/ui.

Most of the Raspberry Pi information charted in the dashboard shown above comes from the node-red-contrib-os package. For example, the SD Card information comes from the Drives node, which you use to query the drives. The values for size, used and available are expressed in KB (1024 bytes), while capacity is a number between 0 and 1; capacity × 100 is the percentage used.

Some of the flows are shown below. The first step is to drag across a timer which you can use to poll the Drive node. Our timer sends a timestamp every minute.

Connect the timer to a Drive node and it will start pumping out messages with the size, used, available and capacity values for every drive on your target system. You can use a Debug node to see messages being sent out by any node. This is very useful in debugging your flows. On the Raspberry Pi there will be a few different file systems on your SD Card so you have to be specific about which area you want information about.


You can add a Function node to include custom JavaScript to process the messages passed between the nodes. The JavaScript used to extract the various Drive information that I use is shown below. The topic variable is used as the name for charts with multiple inputs.

var msg1,msg2,msg3;

if (msg.payload.filesystem === '/dev/root') {

    msg1 = { payload: msg.payload.used };
    msg2 = { payload: msg.payload.available };
    msg3 = { payload: msg.payload.capacity * 100 };

    msg1.topic = "used"
    msg2.topic = "available"
    msg3.topic = "capacity"

}

return [ msg1, msg2, msg3 ];

CPU Temperature




To display CPU temperature we use a different technique. On the Raspberry Pi you can display the current CPU temperature by typing:

/opt/vc/bin/vcgencmd measure_temp
You can use an Exec node to run OS commands. So connect our same timer node to an Exec node and input the command above. We then have to do a bit of processing to extract the temperature as a number. Use another function node with the following code.

msg.payload = msg.payload.replace("temp=","").replace("'C\n","");

return msg;



There are also nodes available for the Sense Hat. You need to use functions similar to those above to extract the various sensor data values.


Controlling GPIO using the Dashboard



Manual control of a GPIO is fairly straightforward. The one trick is that the Switch node outputs true/false and the Raspberry Pi GPIO out node expects a 1/0 input, so we include another Function node to mediate. The relevant code is:

msg.payload = msg.payload ? 1 : 0;

return msg;

Of course, our Raspberry Pi outputs 3.3VDC, which won't turn on a 240VAC lamp, so we use a PowerSwitch Tail kit as an intermediary.





Saturday, April 8, 2017

STEMTera (Arduino Breadboard) Tutorial

What is the STEMTera?



STEMTera was the first project I have supported on Kickstarter, and the experience has been overwhelmingly positive. So what is STEMTera?

At its simplest the STEMTera is a breadboard with an embedded Arduino UNO. Most shields will plug straight in. But it is more than just a simple Arduino prototyping platform; it also includes:
  • a LEGO® compatible bottom, which allows you to mount it directly on your LEGO creation.
  • An exposed ATmega32U2 microcontroller, so users can develop native USB projects with an extra 21 I/O pins. These extra I/O pins work directly with the LUFA framework. More on this below.
  • Multiple IDE support, including Atmel® Studio, the Arduino IDE, AVR-GCC, AVR-GCC with LUFA, Scratch, etc.
  • Embedded LEDs indicating power on, Tx and Rx, plus one connected to D13 for your own use.
The Arduino functionality is the same as for an UNO: plug the USB port into your computer and away you go. The ATmega32U2 functionality is new and deserves a bit more explanation.

ATmega32U2


The newer Arduino Uno boards have two programmable microcontrollers: one is the ATmega328, which is the Arduino processor you usually upload your sketches to, and the second is the ATmega16U2, which is flashed to operate as a USB-to-serial converter.

The ATmega16U2 chip on the Arduino board acts as a bridge between the computer's USB port and the main processor's serial port. Previous versions of the Uno and Mega2560 had an Atmega8U2. It runs firmware that can be updated through a special USB protocol called DFU (Device Firmware Update).

As part of the STEMTera Kickstarter campaign there was a stretch target which, if met, would see the ATmega16U2 upgraded to an ATmega32U2. The target was met, so the upgrade was incorporated into the finished product. Even better, the ATmega32U2 pins have been brought out to the breadboard so that you can utilise them.

By updating the ATmega32U2 firmware, your STEMTera can appear as a different USB device (MIDI controller, HID, etc.).

DFU Programmers




To update the firmware on the STEMTera ATmega32U2 you will need a DFU Programmer.

Windows: Download Atmel's FLIP programmer.

Mac: Install MacPorts. Once MacPorts is installed, type the following in a Terminal window:

sudo port install dfu-programmer
NB: If you've never used sudo before, it will ask for your password. Use the password you log in to your Mac with. sudo allows you to run commands as the administrator of the computer.

Linux: from a command line type

sudo apt-get install dfu-programmer

Enter DFU mode


To enter program (DFU) mode you need to short the ATmega32U2 ICSP reset pin to ground until the red LED starts to flash.

Flash the chip


Windows: use FLIP to upload the hex file to your board.

Mac & Linux: from a terminal window, change directories to get into the folder with the firmware. If you saved the firmware in your downloads folder on OSX, then you might type:

cd Downloads/
Once there, type:

sudo dfu-programmer atmega32u2 erase
When this command is done and you get a command prompt again, you can flash new firmware. If, say, you want to reflash the original Arduino USB-serial firmware (Arduino-usbserial-uno.hex), you would type:

sudo dfu-programmer atmega32u2 flash Arduino-usbserial-uno.hex
Finally:

sudo dfu-programmer atmega32u2 reset





Friday, March 17, 2017

Cayenne Competition

Cayenne




We mentioned Cayenne in an earlier post when we were looking for a video web serving solution for the Raspberry Pi. They provide a drag and drop dashboard for your IoT projects.



They have announced a home automation contest so we thought we would give it a try. The judging criteria for the contest are:
  • Interaction of Arduino hardware and Cayenne software with various areas of the home
  • Use of Cayenne’s Triggers & Alerts and Scheduling features
  • Number of devices and sensors connected
  • Real world practicality and usability

Obviously you have to use Cayenne, and you need to include at least one Arduino.

Connecting an Arduino to the Cayenne Server


This is pretty well documented for the Arduino and Raspberry Pi but there were a few missing steps in getting the connection script to run on our Mac. There are 3 things you need to configure:
  1. Connect your Arduino to your PC. Open up your Arduino IDE, download the Cayenne Library.
  2. Set up your free Cayenne account. Start a new project and add an Arduino. Copy the sketch for your device and paste it into the IDE. Upload the sketch and run it.
  3. This was the tricky bit for us. You need to run a connection script on your Mac which redirects the Arduino traffic to the Cayenne server. The scripts are located under the extras/scripts folder of the Cayenne library (inside your Arduino libraries folder). The instruction for Linux and OSX is to run ./cayenne-ser.sh (it may need to be run with sudo).

Getting the Connection Script to work on a Mac


First you need to find the script. We got to ours using:

cd Arduino/libraries/Cayenne/extras/scripts
As instructed, we then tried:

./cayenne-ser.sh
But received the error:

-bash: ./cayenne-ser.sh: Permission denied
No problem, we thought, we will just use sudo:

sudo ./cayenne-ser.sh
We received a new error:

sudo: ./cayenne-ser.sh: command not found
That's weird. So we tried:

sudo sh ./cayenne-ser.sh
And received another error, but we were getting closer...

This script uses socat utility, but could not find it.

  Try installing it using: brew install socat
So we gave that a shot but we didn't have Homebrew installed. Homebrew is a package manager for the Mac (similar to apt-get on Raspbian). To install Homebrew:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Once Homebrew is installed you can use brew to install socat. Socat is a command line based utility that establishes two bidirectional byte streams and transfers data between them. This is used to get information from the Arduino to the Cayenne server.

brew install socat
Once you have done all that, you can run the connection script again. For us the script ran but didn't use the correct serial port. You can specify which port to use with the -c flag:

sudo sh cayenne-ser.sh -c /dev/tty.usbmodem1421
Use the port listed in the Arduino IDE under Tools -> Port.

Cayenne Hello World


The easiest way to test your new dashboard connection to the Arduino is to add a switch widget pointed at digital output 13 (D13). On most UNO variants this pin is also connected to an onboard LED, so toggling the pin will toggle the LED. If you don't have an onboard LED you can always connect an external one; don't forget a current-limiting resistor if you do.

The beauty of this is that you don't even have to add any code to the Arduino sketch; you can just use the connection sketch provided when you start a new project. For completeness, the code is included below (this is for a USB connection). Don't forget to insert the token for your project.

#include <CayenneSerial.h>

// Cayenne authentication token. This should be obtained from the Cayenne Dashboard.
char token[] = "YOUR_TOKEN_HERE";

void setup()
{
  //Baud rate can be specified by calling Cayenne.begin(token, 9600);
  Cayenne.begin(token);
}

void loop()
{
  Cayenne.run();
}



The setup for your button should look like this:


So apart from a bit of messing about to get the connection script to run, it all works as advertised. We might have a crack at the home automation competition if we can think of something original to do...

HC-SR04 Ultrasonic Sensor Python Class for Raspberry Pi

The HC-SR04




The HC-SR04 ultrasonic ranging module provides 2 cm to 400 cm of non-contact measurement, with ranging accuracy of up to 3 mm. The module includes an ultrasonic transmitter, receiver and control circuitry. The time difference between transmission and reception of the ultrasonic signal is measured and, using the speed of sound and the 'Speed = Distance/Time' equation, the distance between the source and target can easily be calculated.

Credit to Vivek and his article on the same subject for the diagrams.




Wiring the HC-SR04 to a Raspberry Pi


The module has 4 pins:

  • VCC - 5V Supply
  • TRIG - Trigger Pulse Input
  • ECHO - Echo Pulse Output
  • GND - 0V Ground 

Wiring is straightforward with one exception: note that the sensor operates at 5V, not the 3.3V of the Raspberry Pi. Connecting the ECHO pin directly to the Raspberry Pi would be a BAD idea and could damage the Pi. We need to use a voltage divider or a logic level converter module to drop the logic level from the HC-SR04 to a maximum of 3.3V. Current draw for the sensor is 15 mA.

As we have a spare logic level converter, we will use that. Connections for the logic converter are shown below.


For the voltage divider option (R1 = 4.7 kΩ from ECHO, R2 = 10 kΩ to ground): Vout = Vin x R2/(R1+R2) = 5 x 10000/(4700 + 10000) ≈ 3.4V






Python Class for the HC-SR04 Ultrasonic Sensor



To utilise the HC-SR04:

  1. Provide a trigger signal to the TRIG input: it requires a HIGH pulse of at least 10 µs duration.
  2. This causes the module to transmit a burst of eight 40 kHz ultrasonic pulses.
  3. If there is an obstacle in front of the module, it will reflect those ultrasonic waves.
  4. If the signal comes back, the ECHO output of the module will be HIGH for the time taken to send and receive the ultrasonic signal. The pulse width ranges from 150 µs to 25 ms depending on the distance of the obstacle from the sensor, and will be about 38 ms if there is no obstacle.
  5. Obstacle distance = (high-level time × velocity of sound) / 2, where the velocity of sound is 343.21 m/s at sea level and 20°C.
  6. Allow at least 60 ms between measurements.





The time measured by the echo pulse covers the round trip of the ultrasonic signal, so we use Time/2.

Distance = Speed * Time/2

Speed of sound at sea level = 343.21 m/s or 34321 cm/s

Thus, Distance = 17160.5 * Time (unit cm).

As we are using the ultrasonic sensor with our Raspberry Pi robot, we have created a Python class that can be easily imported and used. Note the calibration function, which can be used to help correct for things like altitude and temperature.

We have included a simple low pass filter function, which is equivalent to an exponentially weighted moving average. This is useful for smoothing the distance values returned from the sensor. Beta weights the newest reading, so the smaller the value of beta, the greater the smoothing.

#!/usr/bin/python
# RS_UltraSonic.py - Ultrasonic Distance Sensor Class for the Raspberry Pi 
#
# 15 March 2017 - 1.0 Original Issue
#
# Reefwing Software
# Simplified BSD Licence - see bottom of file.

import RPi.GPIO as GPIO
import os, signal

from time import sleep, time

# Private Attributes
__CALIBRATE      = "1"
__TEST           = "2"
__FILTER         = "3"
__QUIT           = "q"

class UltraSonic():
    # Ultrasonic sensor class 
    
    def __init__(self, TRIG, ECHO, offset = 0.5):
        # Create a new sensor instance
        self.TRIG = TRIG
        self.ECHO = ECHO
        self.offset = offset                             # Sensor calibration factor
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(self.TRIG, GPIO.OUT)                  # Set pin as GPIO output
        GPIO.setup(self.ECHO, GPIO.IN)                   # Set pin as GPIO input

    def __str__(self):
        # Return string representation of sensor
        return "Ultrasonic Sensor: TRIG - {0}, ECHO - {1}, Offset: {2} cm".format(self.TRIG, self.ECHO, self.offset)

    def ping(self):
        # Get distance measurement
        GPIO.output(self.TRIG, GPIO.LOW)                 # Set TRIG LOW
        sleep(0.1)                                       # Min gap between measurements        
        # Create 10 us pulse on TRIG
        GPIO.output(self.TRIG, GPIO.HIGH)                # Set TRIG HIGH
        sleep(0.00001)                                   # Delay 10 us
        GPIO.output(self.TRIG, GPIO.LOW)                 # Set TRIG LOW
        # Measure return echo pulse duration
        pulse_start = pulse_end = time()                 # Defaults in case an edge is missed
        while GPIO.input(self.ECHO) == GPIO.LOW:         # Wait for ECHO to go HIGH
            pulse_start = time()                         # Save pulse start time

        while GPIO.input(self.ECHO) == GPIO.HIGH:        # Wait for ECHO to go LOW again
            pulse_end = time()                           # Save pulse end time

        pulse_duration = pulse_end - pulse_start 
        # Distance = 17160.5 * Time (unit cm) at sea level and 20C
        distance = pulse_duration * 17160.5              # Calculate distance
        distance = round(distance, 2)                    # Round to two decimal points

        if distance > 2 and distance < 400:              # Check distance is in sensor range
            distance = distance + self.offset
            print("Distance: ", distance," cm")
        else:
            distance = 0
            print("No obstacle")                         # Nothing detected by sensor
        return distance

    def calibrate(self):
        # Calibrate sensor distance measurement
        while True:
            self.ping()
            response = input("Enter Offset (q = quit): ")
            if response == "q":                          # Module-level __QUIT is name-mangled inside the class
                break
            self.offset = float(response)                # Update this sensor's calibration offset
            print(self)
            
    @staticmethod
    def low_pass_filter(value, previous_value, beta):
        # Simple infinite-impulse-response (IIR) single-pole low-pass filter.
        # ß = discrete-time smoothing parameter (determines smoothness). 0 < ß < 1
        # LPF: Y(n) = (1-ß)*Y(n-1) + (ß*X(n))) = Y(n-1) - (ß*(Y(n-1)-X(n)))
        smooth_value = previous_value - (beta * (previous_value - value))
        return smooth_value
        

def main():
    sensor = UltraSonic(8, 7)       # create a new sensor instance on GPIO pins 7 & 8
    print(sensor)

    def endProcess(signum = None, frame = None):
        # Called on process termination. 
        if signum is not None:
            SIGNAL_NAMES_DICT = dict((getattr(signal, n), n) for n in dir(signal) if n.startswith('SIG') and '_' not in n )
            print("signal {} received by process with PID {}".format(SIGNAL_NAMES_DICT[signum], os.getpid()))
        print("\n-- Terminating program --")
        print("Cleaning up GPIO...")
        GPIO.cleanup()
        print("Done.")
        exit(0)

    # Assign handler for process exit
    signal.signal(signal.SIGTERM, endProcess)
    signal.signal(signal.SIGINT, endProcess)
    signal.signal(signal.SIGHUP, endProcess)
    signal.signal(signal.SIGQUIT, endProcess)

    while True:
        action = input("\nSelect Action - (1) Calibrate, (2) Test, or (3) Filter: ")

        if action == __CALIBRATE:
            sensor.calibrate()
        elif action == __FILTER:
            beta = input("Enter Beta 0 < ß < 1 (q = quit): ")
            filtered_value = 0
            if beta == __QUIT:
                break;
            while True:
                filtered_value = sensor.low_pass_filter(sensor.ping(), filtered_value, float(beta))
                filtered_value = round(filtered_value, 2)
                print("Filtered: ", filtered_value, " cm")
        else:
            sensor.ping()

if __name__ == "__main__":
    # execute only if run as a script
    main()

## Copyright (c) 2017, Reefwing Software
## All rights reserved.
##
## Redistribution and use in source and binary forms, with or without
## modification, are permitted provided that the following conditions are met:
##
## 1. Redistributions of source code must retain the above copyright notice, this
##   list of conditions and the following disclaimer.
## 2. Redistributions in binary form must reproduce the above copyright notice,
##   this list of conditions and the following disclaimer in the documentation
##   and/or other materials provided with the distribution.
##
## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
## ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
## WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
## DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
## ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
## (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
## LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
## ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
## (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.



Wednesday, March 15, 2017

Controlling the Raspberry Pi via a web browser

Web Controlled Robot





Now that we can stream video to a web page, it would be nice to be able to remotely control our robot. To do this we will use the Raspberry Pi to run a web server that serves the page used to control the robot. Once we have this up and running you will be able to drive your robot around using a browser on your laptop via WiFi on your LAN.

As shown in the previous post, you can use the Python command print(server) to see what URL you need to point your browser at to see the video and control your robot. The way the controls work is as follows:
  1. Typing the address of the page served by your Pi (e.g. http://192.168.0.9:8082) into your browser will send a web request to the Python program running the server, in our case RS_Server.py.
  2. RS_Server responds with the contents of index.html. Your browser renders this HTML and it appears in your browser.
  3. The broadcasting of video data is handled by the broadcast thread object in RS_Server. The BroadcastThread class implements a background thread which continually reads encoded MPEG1 data from the background FFmpeg process started by the BroadcastOutput class and broadcasts it to all connected websockets. More detail on this can be found at pistreaming if you are interested. Basically the camera is continually taking photos, converting them to MPEG1 frames and sending them at the frame rate to a canvas in your browser.
  4. You will see below that we have modified the index.html file to display a number of buttons to control our robot. Pressing one of these buttons will send a GET request to the server running on your Pi with a parameter of "command" and the value of the button pressed. We then handle the request by passing the appropriate command on to our MotorControl class. To do this we will need to bring together RS_Server and RS_MotorControl in our new RS_Robot class. A simplified sketch of this command handling is shown after this list.
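
To make step 4 concrete, here is a stripped-down sketch of the kind of command handling performed on the Pi. It is illustrative only and is not the actual RS_Server.py code: it assumes Python 3's standard http.server module and a motor control object with forward, back, left, right and stop methods (those names are placeholders).

# Illustrative sketch only - not the actual RS_Server.py. Shows how a GET
# request with a "command" parameter could be parsed and dispatched to a
# hypothetical motor control object.
from http.server import BaseHTTPRequestHandler
from urllib.parse import urlparse, parse_qs

class RobotRequestHandler(BaseHTTPRequestHandler):
    motor = None                                    # Assign a MotorControl-style instance before serving

    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        command = query.get('command', [''])[0]     # e.g. 'f', 'b', 'l', 'r', 's'
        method = {'f': 'forward', 'b': 'back', 'l': 'left',
                  'r': 'right', 's': 'stop'}.get(command)
        if method and self.motor:
            getattr(self.motor, method)()           # Pass the command on to the motor control object
        self.send_response(200)                     # Acknowledge the button press
        self.end_headers()

The sendCommand() JavaScript function in the modified index.html below is what generates these GET requests.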

Modifying index.html



The index.html file provided by pistreaming just creates a canvas in which to display our streaming video. To this we will add a table with 9 command control buttons for our robot. You could get away with only 5 (Forward, Back, Left, Right and Stop), but looking ahead we know we will also need 4 more (speed increase, speed decrease, auto and manual). Auto and Manual will toggle between autonomous control and remote control (i.e. via the browser). Associated with each button is a JavaScript function that sends the appropriate command when the button is clicked.

In addition to controlling your robot via the on screen buttons you can use the keyboard. We have mapped the following functionality:

Up Arrow    = Forward
Down Arrow  = Back
Left Arrow  = Left
Right Arrow = Right
Space       = Stop
-           = Decrease Speed
+           = Increase Speed
m           = Manual
a           = Autonomous

You can modify the index.html to map whatever keybindings you want. Be aware that the keycode returned by different browsers isn't always consistent. You can use the JavaScript Event KeyCode Test Page to find out what key code your browser returns for different keys.

The manual and auto modes don't do anything at this stage. 

The modified index.html file is shown below.

<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=${WIDTH}, initial-scale=1"/>
    <title>Alexa M</title>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript" charset="utf-8"></script>

    <style>
        .controls {
            width: 150px;
            font-size: 22pt;
            text-align: center;
            padding: 15px;
            background-color: green;
            color: white;
        }
    </style>

    <style type="text/css">
            body {
                background: ${BGCOLOR};
                text-align: center;
                margin-top: 2%;
            }
            #videoCanvas {
                /* Stretch the canvas to the template width/height, regardless of its internal size. */
                width: ${WIDTH}px;
                height: ${HEIGHT}px;
            }
    </style>

    <script>
    function sendCommand(command)
    {
        $.get('/', {command: command});
    }
    
    function keyPress(event)
    {
        var keyCode = event.keyCode;
        
        switch (keyCode) {
            case 38:                // up arrow
                sendCommand('f');
                break;
            case 37:                // left arrow
                sendCommand('l');
                break;
            case 32:                // space
                sendCommand('s');
                break;
            case 39:                // right arrow
                sendCommand('r');
                break;
            case 40:                // down arrow
                sendCommand('b');
                break;
            case 109:               // - = decrease speed
            case 189:
                sendCommand('-');
                break;
            case 107:
            case 187:
                sendCommand('+');   // + = increase speed
                break;
            case 77: 
                sendCommand('m');   // m = manual (remote control)
                break;
            case 65:
                sendCommand('a');   // a = autonomous
                break;
            default: return;        // allow other keys to be handled
        }
        
        // prevent default action (eg. page moving up/down with arrow keys)
        event.preventDefault();
    }
    $(document).keydown(keyPress);
    </script>
</head>

<body>

    <h1 style="color: white">Alexa M</h1>

    <!-- The Canvas size specified here is the "initial" internal resolution. jsmpeg will
        change this internal resolution to whatever the source provides. The size the
        canvas is displayed on the website is dictated by the CSS style.
    -->
    <canvas id="videoCanvas" width="${WIDTH}" height="${HEIGHT}">
        <p>
            Please use a browser that supports the Canvas Element, like
            <a href="http://www.google.com/chrome">Chrome</a>,
            <a href="http://www.mozilla.com/firefox/">Firefox</a>,
            <a href="http://www.apple.com/safari/">Safari</a> or Internet Explorer 10
        </p>
    </canvas>
    <script type="text/javascript" src="jsmpg.js"></script>
    <script type="text/javascript">
        // Show loading notice
        var canvas = document.getElementById('videoCanvas');
        var ctx = canvas.getContext('2d');
        ctx.fillStyle = '${COLOR}';
        ctx.fillText('Loading...', canvas.width/2-30, canvas.height/3);
        // Setup the WebSocket connection and start the player
        var client = new WebSocket('ws://${ADDRESS}/');
        var player = new jsmpeg(client, {canvas:canvas});
    </script>

    <table align="center">
    <tr><td  class="controls" onClick="sendCommand('-');">-</td>
        <td  class="controls" onClick="sendCommand('f');">Forward</td>
        <td  class="controls" onClick="sendCommand('+');">+</td>
    </tr>
    <tr><td  class="controls" onClick="sendCommand('l');">Left</td>
        <td  class="controls" onClick="sendCommand('s');">Stop</td>
        <td  class="controls" onClick="sendCommand('r');">Right</td>
    </tr>
    <tr><td  class="controls" onClick="sendCommand('m');">Manual</td>
        <td  class="controls" onClick="sendCommand('b');">Back</td>
        <td  class="controls" onClick="sendCommand('a');">Auto</td>
    </tr>
    </table>

</body>
</html>

Python Robot Class


As Alexa M continues to evolve, so too will this robot class. For now we can keep things pretty simple. In addition to creating a robot class we have updated the motor control, servo and server classes. Rather than reproduce all the code, we will provide links to our Gist Repository where you can download the latest versions. For completeness, I will also provide links to the HTML and JavaScript library that you will need. All these files need to be in the same directory.

  1. RS_Robot.py version 1.0 - Run this script on your Pi to create a telepresence rover.
  2. RS_Server.py version 1.1 - Updated to include command parsing.
  3. RS_MotorControl.py version 1.1 - New motor control methods.
  4. RS_Servo.py version 1.2 - License added.
  5. index.html version 1.0 - The file shown in the previous section.
  6. jsmpg.js - Dominic Szablewski's Javascript-based MPEG1 decoder.
That completes the remote control and video streaming portion of the design. We hope you have as much fun driving around your robot as we do. Next up we will look at battery monitoring and autonomous control of the robot.