20160609

Using the SN75176A in NeoPixel & Other Applications

I have received many great comments and emails regarding my post on Driving WS2812/NeoPixels RGB LEDS over CAT5 Ethernet Cable, and it is in fact one of my most popular posts. A few people have requested that I go into a bit more detail on how the SN75176A chip is used in this application, so in this post I will take a closer look at how this device works.

The SN75176A is a Differential Bus Transceiver, which means that it is used to convert back and forth between a normal "single-ended" signal on one side and a "differential signal" on the other. There is a good Wikipedia article that goes into great detail on the benefits of differential signaling, but the quick overview is that it provides lower signal degradation and better noise immunity than single-ended signaling, allowing us to send our NeoPixel data over much longer distances.

There seems to be a bit of confusion among some readers regarding the SN75176A's ability to be used for both sending and receiving differential signals, so let's see if we can clear things up a bit. This device contains both a single-ended to differential "transmitter" and a differential to single-ended "receiver". Depending on how a couple of control lines on the device are set, we can enable one or both components as needed. The following diagram shows how things are hooked up inside the chip:

[Diagram: SN75176A internal block diagram - a driver from the D pin out to the A/B bus pins (enabled by DE), and a receiver from A/B back to the R pin (enabled by RE)]

The "A" and "B" pins are the differential signal lines which are connected to the long run of cable between devices. The "D" pin is used to send data on the differential pins, and the "R" pin carries the data received from the differential lines. Finally, the "RE" and "DE" pins are inputs to the device that enable the receiver and the transmitter, respectively. Notice the "bar" over the RE, indicating that it is "active-low", meaning that a low input enables the receiver and a high input disables it.

If you want to configure the device as a transmitter, you would connect both the RE and DE pins to VCC – this disables the receiver and enables the transmitter so that any data applied to the D line is sent to the differential outputs on A and B. Now if you want to configure the device as a receiver, you would connect the RE and DE pins to ground, disabling the transmitter and enabling the receiver so that data received on the A and B pins is converted to a single-ended signal on the R pin.

You may be wondering what happens if the RE and DE signals are not tied to the same high/low levels. If you pull RE low and DE high, both the transmitter and receiver are enabled, and any data sent out using the D pin is "looped back" or echoed on the "R" pin. This can be useful in some configurations to confirm data being sent or to detect collisions with other transmitters on the differential lines. The other configuration is when the RE pin is high and the DE pin is low. In this configuration, both the transmitter and receiver are disabled, effectively preventing data from being sent or received on the differential pins.
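To summarize, here are the four possible control-line combinations and the resulting modes:

RE     DE     Mode
low    low    Receive only (data on A/B comes out on R)
low    high   Transmit and receive (data sent on D is echoed on R)
high   low    Both disabled (the chip ignores the bus)
high   high   Transmit only (data on D is driven onto A/B)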

Now this isn't needed in my NeoPixel application because data is always flowing in the same direction, but if you needed to both send and receive data over the differential pins, you could connect the RE and DE lines to your microcontroller, allowing it to select the data direction, either transmitting or receiving as needed. Note that even though you can enable both the transmitter and receiver at the same time, two such devices cannot send data to each other simultaneously - only one device at a time can be in transmit mode, otherwise the data gets corrupted (just like two or more people talking at once makes it difficult to understand what anyone is saying).

Hopefully this explanation will help out some of you that have been looking to tweak or modify the circuit in my long-distance NeoPixel post, or just give you a better understanding of how the design works, and maybe inspire you to try applying this idea in your own design. Also, thanks to Leo B. for prompting me to finally get around to writing this post!


20150523

Giving VirtualBox Guests Access to the Internet Without Exposing the Host's Network

I have a virtual machine running under VirtualBox that needs to be able to get updates from the Internet. Using VirtualBox's normal network interface methods (NAT or Bridging), the guest machines not only have access to the Internet, but also to all the interfaces on the host machine and their networks! Googling around for a solution didn't turn up anything useful, and VirtualBox doesn't seem inclined to provide an Internet-only option either.
 
After messing with various iptables solutions for about a day and coming up empty-handed, I decided to try a different approach to the problem. My solution is to create another “router” virtual machine and connect the main virtual machine through it to the rest of the network. All network traffic for the guest would go through an “Internal Network” connection to the router VM, and then the router would provide NAT, DHCP, and DNS services for the guests. This solution also has the benefit of providing multiple guest VMs with Internet-only connections.
 

The router VM is very lightweight and doesn't require too much in the way of resources. I used Ubuntu Server 14.04.2 (32 bit), creating a virtual machine with 512MB of RAM and a 10GB hard drive. (You could probably get away with less RAM and drive space, but I haven't played with trimming it down yet.) The secret sauce is in how the network adapters for these machines are configured. For the guest machine, you need to set up a single network adapter on an “Internal Network” (named vbx-router in this case). You can do this from the VirtualBox GUI or the command line as follows:
vboxmanage modifyvm "guest-vm-name" --nic1 intnet --intnet1 vbx-router

The router VM will have two adapters, the first one bridged to the host's main network interface (typically eth0 on Linux hosts), and the second one using the same internal network we defined for the guest VM. The command line for this would look something like:
vboxmanage modifyvm "router-vm-name" --nic1 bridged --bridgeadapter1 eth0
vboxmanage modifyvm "router-vm-name" --nic2 intnet --intnet2 vbx-router

After installing the Ubuntu 14.04.2 x32 Server (you can use your favorite flavor of non-GUI Linux, but your mileage may vary), make sure it is up-to-date (sudo apt-get update and sudo apt-get upgrade on Ubuntu/Debian). It's also probably a good idea to install OpenSSH Server, especially if your virtual machines are headless like mine. Next, it's time to install and configure the routing services. These instructions are based on a really useful blog post over at The Novian Blog.
 

Configure the Interfaces

Edit the /etc/network/interfaces file so it looks similar to this:

# The loopback network interface
auto lo
iface lo inet loopback

# The WAN (bridged) interface
auto eth0
iface eth0 inet dhcp

# The LAN (internal) interface
auto eth1
iface eth1 inet static
    address 10.0.2.1
    netmask 255.255.255.0
    network 10.0.2.0
    broadcast 10.0.2.255

You can set up the WAN interface with a static IP if you'd like, but the LAN interface should be static so that guest VMs can always find it. The addresses for the LAN interface were chosen to be similar to the default NAT configuration provided by VirtualBox.
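If you do decide to give the WAN interface a static IP, the eth0 stanza would look something like this instead (the addresses here are just placeholders - substitute values appropriate for your own network):

auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1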
 

Install and configure DNSmasq

DNSmasq is a DHCP server and DNS forwarder that is simple to set up; install it with the command:
sudo apt-get install dnsmasq

Then add the following to the bottom of /etc/dnsmasq.conf:
interface=eth1
domain=home.teknynja.com
dhcp-range=10.0.2.10,10.0.2.99,12h

Of course you will want to change the domain to something suitable for your network.
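After saving the file, restart the service so it picks up the new configuration:
sudo service dnsmasq restart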
 

Enable IP Forwarding

Un-comment the following line in /etc/sysctl.conf:
net.ipv4.ip_forward=1
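You can apply this setting immediately, without rebooting, by running:
sudo sysctl -p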

Configure iptables

Create the file /etc/iptables.rules:
*nat
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT

*filter
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -i eth0 -j DROP
COMMIT

*raw
-A PREROUTING -i eth1 -d 192.168.0.0/16 -j DROP
COMMIT

This configuration does the following:
  • Sets up NAT outbound on eth0
  • Allows all inbound localhost traffic
  • Allows inbound established connections
  • Allows SSH connections from WAN to our router
  • Drops anything else coming in from eth0
  • Drops packets coming from eth1 destined for the host's local network(s)
Activate the rules (as a quick sanity check before rebooting!):
sudo iptables-restore < /etc/iptables.rules
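You can also dump the active rules to confirm everything loaded as expected:
sudo iptables -L -n -v
sudo iptables -t nat -L -n -v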

Now try to ssh into the router and verify that you can connect. Once you've verified that the rules are working, configure the iptables rules to load on network startup. Add the following to /etc/network/interfaces after the line iface lo inet loopback:
pre-up iptables-restore < /etc/iptables.rules


 

Profit!

Reboot your router virtual machine to make sure it loads all of your new configuration. Next, double-check the VirtualBox network configuration on the guest VM, then go ahead and start it. Check to make sure the guest picks up an IP address from the router VM, and proceed to test the guest's network. You should now be able to access the Internet, but be unable to access anything on your local network (including the host!).
 
Hopefully this will help you create an isolated guest with Internet access in your setup. I've been needing something like this for a while, and now that I've figured it out I felt like I needed to share. If you see anything wrong with this setup, or know how to make it more secure, please feel free to leave a comment.


20150517

How to Remove the GUI from your Raspberry Pi

I was getting prepared to start another headless Raspberry Pi project (an IoT gateway) and reached for my old standby command-line operating system, Minibian. I grabbed the latest version, copied the image to the memory card, and started setting things up. I ran into a problem when I tried to get the RaLink RT5370-based USB WiFi adapter working - the kernel seemed to recognize the device, but I just couldn't get it working. After digging through the system logs it became apparent that the firmware required by the adapter was not present in Minibian (only later did I realize that I may have just needed to install a package to get the required firmware). After trying to get the WiFi adapter to work for a few hours, I gave up and switched to Raspbian. I was then able to get everything working (including the WiFi adapter), but was left with a GUI and its associated bloat that I didn't really need, so I set about seeing what packages could be removed while still leaving me with a fully functioning command-line based system.
 
Initially, I just did a search on the web and found a few different posts and conversations that dealt with removing the GUI, and combined them to get things slimmed down quite a bit. After that, I started listing the installed packages and removing the ones that looked like they would not be needed. (It was at this point I noticed the firmware-ralink package that I could have probably installed on Minibian to get the WiFi adapter working there – maybe next time.)
 
My first stop was a conversation on raspberrypi.stackexchange.com where it was suggested that you could rip out the X window system by the roots by simply removing "libx11-.*". That did remove a lot of packages from the system! Other blog posts like this one at Richard's Ramblings added to the list of packages to remove.
 
Finally, I used the dpkg --get-selections | grep -v deinstall command (thanks askubuntu.com!) to list all the remaining packages on the Raspberry Pi and removed all the ones that looked like I could do without. There were a few times when I removed too much and had to re-install a package or two, but I eventually boiled it down to the following commands to convert a normal GUI Raspbian installation to a lean command-line-only version. (Be sure you don't have any important files or configuration on your system before doing this, and don't blame me if your mission-critical Raspberry Pi application gets lost in the process!)
A word of caution: One of the uninstalled packages took the /etc/network/interfaces file with it, so before stripping all these packages, you should make a copy somewhere else on the device and then restore it before rebooting your system, or you will have no networking available after rebooting!
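For example, stash a copy in root's home directory before you start removing packages:
sudo cp /etc/network/interfaces /root/interfaces.backup
and then put it back before you reboot:
sudo cp /root/interfaces.backup /etc/network/interfaces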
 
sudo apt-get remove --auto-remove --purge libx11-.*
sudo apt-get remove --purge raspberrypi-artwork triggerhappy shared-mime-info gcc-4\.[0-7].*
sudo apt-get remove --purge gdb gdbserver penguinspuzzle samba-common omxplayer
sudo apt-get remove --purge alsa-.* build-essential gstreamer1.0-.* lxde-icon-theme
sudo apt-get remove --purge desktop-file-utils gnome-themes-standard-data menu menu-xdg
sudo apt-get autoremove
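To see how much space you get back, you can check the file system usage with df -h / before and after the purge.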
After all was said and done, I was able to reduce the size of the file system from 2.5GB down to just 800MB. Along with the size savings, there are also fewer programs running and fewer packages that need updating. Not to mention that less software on the system means a smaller attack surface for hackers to leverage.
 
So if you find that you need to remove the GUI from your Pi, hopefully this information will help you with your cleaning task. And of course, if you need something even smaller, there's always Minibian for a really stripped-down configuration.


20140301

Raspberry Pi+Minibian: Adjusting the Locale and Time Zone

I really have enjoyed playing with the Raspberry Pi, but many of my projects don't require a GUI (for example OpenVPN and web servers). For projects like these, I usually install Minibian, which is a great, small version of Raspbian with all the GUI goodness removed. It is a perfect starting point for my headless projects, except that out of the box it comes set up with the locale and time zone configured for Great Britain. This causes some strange characters to come up when typing on the keyboard, and the clock doesn't reflect my local time zone.

Fortunately, these issues are easy to address, and I will show you the steps I take to configure my system for my locale and time zone (en_US & America/Los_Angeles). Just use your own locale and time zone in the steps below to configure your system as desired.

  1. Install Minibian on your SD card following the provided instructions. Insert the card into your Raspberry Pi, power it up, and log in. (The default username/password is root/raspberry). You may also want to change the name of your system from the default "raspberrypi" by editing the /etc/hostname file.
  2. Change the keyboard locale. Run the following command:
    dpkg-reconfigure keyboard-configuration
    then
    • Set your "Keyboard model" to "Generic 105-key (Intl) PC".
    • Set the "Country of origin for the keyboard" and "Keyboard layout" both to "English (US)".
    • Next for the "Key to function as AltGr" choose "The default for the keyboard layout".
    • For "Compose key" I chose "No compose key".
    • Finally, choose "No" for "Use Control+Alt+Backspace to terminate the X server".
    Once the command completes, reboot your Raspberry Pi to pick up the new keyboard layout.
  3. Now change the system locale. Run the command
    dpkg-reconfigure locales
    and de-select "en_GB.UTF-8 UTF-8" and select "en_US.UTF-8 UTF-8", then set the default locale to "en_US.UTF-8" on the next screen.
  4. Edit the /etc/default/locale file and change/add the following lines:
    LANG=en_US.UTF-8
    LC_ALL=en_US.UTF-8
    LANGUAGE=en_US.UTF-8
    then log out and back in again to pick up the new locale.
  5. Finally, run
    dpkg-reconfigure tzdata
    and select "America", then "Los_Angeles" to set the local time zone for your system.
  6. Now that your system understands you a little better, it's probably a good time to update your system using
    apt-get update
    apt-get upgrade
    then reboot your system again.
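At this point you can double-check your work: running locale should now show the en_US.UTF-8 settings, and date should display the time in your local time zone.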

Optional Tweaks


If you'd like to make your system a little more secure, you can enable sudo and disable the root login with the following steps:
  1. Install sudo:
    apt-get install sudo
  2. Create your new user. For example, to add the user "bob" run the command
    adduser bob
    and follow the prompts.
  3. Add your new user to the sudoers list. Run visudo then add the following line (changing the username to the user you just added) after the root ALL=(ALL:ALL) ALL line:
    bob ALL=(ALL:ALL) ALL
  4. Now disable the root user with the command
    passwd -l root
    (note that the option is a lower-case L, not the digit one)
  5. Now log out of your root session and log in as your new user.
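Before you close out your root session, it's worth confirming that sudo actually works for the new user - running sudo whoami from the new account should print root.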
Finally, it's a good idea to change the SSH server keys because the ones that come with the image file make it easy to impersonate your system (since everybody has access to those keys):
sudo rm /etc/ssh/ssh_host_*
sudo dpkg-reconfigure openssh-server
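If you'd like to verify that new keys were actually generated, you can print one of the new fingerprints with:
sudo ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub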

Now you should have a headless Raspberry Pi system that you can use for various server projects that is customized for your locale and timezone! If you have any questions or comments on this procedure, please feel free to leave a comment.


20140208

Driving WS2812/NeoPixels RGB LEDS over CAT5 Ethernet Cable

I was recently working on a project using Adafruit NeoPixels (WS2812) RGB LED strips where a single controller was supposed to drive five strings of 30 pixels that were physically located several feet apart. I knew right away that there were going to be a couple of problems trying to drive the strings over more than 20 feet of cable. First off, there is the voltage drop issue - these strings can draw several amps, and could easily drop around 1 volt or more, depending on the cabling and connectors. The other issue is the integrity of the data signal being sent to the string. The pixels are very timing sensitive, and noise or ringing caused by a long wire run could easily introduce errors, or even result in a completely non-functioning string. This last issue was made even worse by my choice of controller, the Teensy 3.0, which is a 3.3V part, while the pixels require a 5V input. (By the way, the Teensy 3.0/3.1 plus the OctoWS2811 library is an awesome choice for driving dozens to hundreds of RGB pixels!)

While it seems that many people have successfully driven NeoPixel/WS2812 strings directly from the Teensy using a small resistor to reduce ringing, I was pretty sure that wouldn't work over any appreciable distance. It also appears that the newer WS2812B pixels won't work at all with just a 3.3V signal. It didn't take me too long to come up with the solution for the data issue - RS-422/485 drivers and receivers! Since the pixel data is really just serial data, I figured using RS-485 balanced transmission lines to send the data would work perfectly. The data signal is well within the bandwidth these chips are capable of, and as a bonus, the SN75174 Quad Differential Line Driver IC inputs will easily accept the 3.3V outputs from the controller! On the receiving end, I went with SN75176 Differential Bus Transceiver chips because I had several of those on hand in my parts bin. I just needed the receiver part of the chip, and it was quite easy to disable the transmitter portion.

Next up was the voltage drop issue. Right about the same time I was dealing with this I discovered these LM2596 DC-DC Buck Converters on Amazon - they were perfect! Their low cost, small size, high efficiency and wide input voltage range made them a snap to integrate into my project. (Just make sure you check/adjust the output voltage on these before you use them!) Now I could feed 12V into the long cables running out to the remote strings, and use these converters on the receiving side to drop it down to the needed 5V. Since I was only driving about 30 pixels on each line, the 3A output was more than enough to drive each string.

With all the pieces in place, it was time to test out my ideas. First, I wanted to test driving the data signal through the differential transceivers, so I wired up a transmitter at one end and a receiver at the other end of 100 feet of CAT5 cable, connected my controller to the transmitter, and attached 5 meters (150 pixels) of LEDs on the receiving end. For now, I just connected a beefy 5V/10A power supply at the receiving end to power the string. After connecting everything up, it worked like a charm! Every pixel was responding as though it was sitting right next to the controller.

Next up, I cut the string up into 5 lengths of 30 pixels each, and wired them up as shown below, using the differential transmitters/receivers and the DC/DC converters (again making sure they were adjusted to provide 5V output!), powering everything with a large 12V power supply. Everything worked as expected - the pixels were changing colors as commanded, with no color shift or dimming caused by voltage drops.

Notice that I am calling out the color codes for the CAT5 wiring - it is important that at least the data wires are on a twisted pair of conductors (blue/blue-white in this case). I also like to have the power wires paired as well, with all the +V connections on the solid wires and the -V connections on the white-striped wires, which helps with noise immunity on the power lines. In my project, I used 4-position connectors and just tied all the +V wires together on one terminal and all the -V wires on another (with the data on the remaining two terminals). Using all the extra wires in the cable for power helps reduce the resistance, and thus the voltage drop on the cables.
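Putting that scheme together, the conductor assignments look like this (which wire of the blue pair carries A versus B is arbitrary, as long as it matches on both ends):

Pair      Solid conductor    White-striped conductor
Blue      Data (A)           Data (B)
Orange    +12V               -V (ground)
Green     +12V               -V (ground)
Brown     +12V               -V (ground)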

In the end, my project worked well, with the strips connected via 15 to 25 foot lengths of CAT5 cable all connecting back to my central controller. If you have a project where you need to have several remote RGB pixel strips all controlled from a single controller, hopefully this approach can make your pixels work just like they were connected directly to the controller. As usual, your feedback is always appreciated, and if you have any questions please feel free to leave a comment.

UPDATE: I've just added a new post that goes into more detail about using the SN75176A chip in this design, for those of you who have asked about variations of the above circuit.


20130609

Making kvm/qemu/libvirt Play Nice with PulseAudio on a Headless Ubuntu 12.04 Server

I've been running over a dozen virtual machines on my headless server for almost two years now, and for all that time I've always missed being able to hear the audio from those machines. I would occasionally try to figure out how to make audio work over VNC, but never could find a solution on the Internet. Finally last week I decided to at least get part-way there by getting the audio to play on the server's speaker port. The first step was pretty easy – installing PulseAudio on the server:

sudo apt-get install pulseaudio

Now from what I could gather on the Internet, it seems like I needed to run PulseAudio in system mode, despite all the warnings that it should probably not be run that way. I figured that since I don't usually have any logged in users on the system, it would just be better to have it running all the time. In order to do that, I edited the /etc/default/pulseaudio file, and changed the following settings to:

PULSEAUDIO_SYSTEM_START=1
DISALLOW_MODULE_LOADING=1

Then I added my user and the libvirt-qemu user to the pulse-access group:

sudo adduser myuser pulse-access
sudo adduser libvirt-qemu pulse-access

You'll need to log out and back in again for the new group to be picked up on your shell. Finally, I started the PulseAudio service:

sudo service pulseaudio start

Now a quick test to make sure the sound subsystem was working:

paplay test-sound.wav

In my case, I could barely hear the sound playing, so I did a pactl list sinks to figure out which sink was being used, then issued
pactl set-sink-volume 1 100%
pactl set-sink-mute 1 0
to set the volume level of sink 1 to the maximum and unmute it. Now I could hear the sound just fine!

The next hurdle was to get the sound from the virtual machines to play through PulseAudio. It turns out there are quite a few obstacles to achieving that goal. First off, libvirt automatically disables audio if you are using a VNC client! It turns out to be fairly simple to fix that though, simply edit /etc/libvirt/qemu.conf and change the following setting to:

vnc_allow_host_audio = 1

After restarting the libvirt daemon using sudo service libvirt-bin restart I could see in the syslog file that libvirt/kvm was trying to use the PulseAudio subsystem, but apparmor was blocking access to several key files/directories. I never did find a working answer by Googling, but I worked out the following settings for the /etc/apparmor.d/abstractions/libvirt-qemu file. I changed /{dev,run}/shm r, to /{dev,run}/shm rw, then added /{dev,run}/shm/pulse* rw, right after that line. Finally I added /var/lib/libvirt/.pulse-cookie rwk, (note the trailing commas on those lines!) then told apparmor to reload the configuration:
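After those edits, the relevant lines of the abstraction file should read:

/{dev,run}/shm rw,
/{dev,run}/shm/pulse* rw,
/var/lib/libvirt/.pulse-cookie rwk,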

sudo invoke-rc.d apparmor reload

I fired off a Windows XP x32 guest, and was able to hear sound, but it was very distorted and choppy. The solution to that was to change the sound hardware in the virtual machine's configuration file from <sound model='ac97'> to <sound model='es1370'>. After that, I was getting perfect sound from my virtual machine!
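(If you're wondering where that setting lives, the sound hardware is part of the guest's XML definition, which you can edit with virsh edit guest-name - the element should end up reading <sound model='es1370'> instead of <sound model='ac97'>.)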

Now for a few caveats – it seems that changing any of the PulseAudio configuration or restarting the service while the virtual machine is running can cause problems, ranging from the sound no longer working all the way to the virtual machine's OS hanging while trying to play sounds. So once you've started your virtual machine, leave things alone! I have also been working on trying to forward the sound over the network to my workstation, but so far I am having mixed results with that. Hopefully I'll have another post soon describing how to make that work.

And here is the usual warning that goes with tweaking your system like this: These instructions worked for me, but your mileage may vary. Also, I won't be responsible if any of this causes your machine to stop working or catch on fire – but this stuff should be pretty straightforward and not cause any serious issues that can't be reversed. Hopefully my adventure will help you to enjoy hearing from your virtual machines. If you have any questions or corrections, please feel free to post them in the comments.


20130527

Time-Lapse Video Capture From Network Cameras (Linux)

I have several network cameras watching the outside of my home, monitored by ZoneMinder. I have it set up so that when motion is detected, it will record for several seconds and send me an email with stills of the incident. While this is nice and gives me a little peace of mind, I've always thought about having it record continuously. That is easy enough to do in ZoneMinder, but I didn't really want to use up that much storage recording video and then have to scroll through it to find anything interesting.

The other day I saw a blog post where someone was using a Raspberry Pi and a webcam to do some time-lapse photography, and that sparked an idea that seemed easy enough to do in an afternoon – I could come up with a Python script to grab images from the network cameras at fixed intervals, and write them to a video file in order to generate a time-lapse video!

The first step was to figure out how to build a video file a frame at a time using Python. I had played with the motion-jpeg (mjpeg) format in the past, which pretty much consists of jpeg images streamed one after the other in a file (sometimes with a boundary record between them). I discovered that I could simply capture and append jpeg images to a file and get a video file that could be read by a few video players and converters. Best of all, I could use a simple avconv (formerly ffmpeg) command to convert the mjpeg files to mp4, which is smaller and viewable by almost any player.

Next, I wanted to be able to time-stamp each image so that I could tell when the video was created. For this I stumbled across the Python Imaging Library (PIL) which supports several image formats, including jpeg. Using it, I was able to select a font and write a time-stamp on each image as it was captured before adding it to the mjpeg video file. If it isn't already installed on your system, you can install it using

sudo apt-get install python-imaging
for Debian-based systems or by using the appropriate package manager for your distro.

With all the pieces in place, I developed a little Python script that periodically grabs images from several network cameras and builds a separate mjpeg file for each of them:

talicam.py:
#!/usr/bin/python

# Number of seconds between frames:
LAPSE_TIME = 30

# Name of truetype font file to use for timestamps (should be a monospace font!)
FONT_FILENAME = "UbuntuMono-B.ttf"

# Format of timestamp on each frame
TIMESTAMP_FORMAT = "%Y-%m-%d %H:%M:%S"

# Command to batch convert mjpeg to mp4 files:
#  for f in *.mjpeg; do echo $f ; avconv -r 30000/1001 -i "$f" "${f%mjpeg}mp4" 2>/dev/null ; done

import urllib
import sys, time, datetime
import StringIO
import Image, ImageDraw, ImageFont

class Camera:
    def __init__(self, name, url, filename):
        self.name = name
        self.url = url
        self.filename = filename
        
    def CaptureImage(self):
        camera = urllib.urlopen(self.url)
        image_buffer = StringIO.StringIO()
        image_buffer.write(camera.read())
        image_buffer.seek(0)
        image = Image.open(image_buffer)
        camera.close()
        return image
        
    def TimestampImage(self, image):
        draw_buffer = ImageDraw.Draw(image)
        font = ImageFont.truetype(FONT_FILENAME, 16)
        timestamp = datetime.datetime.now()
        stamptext = "{0} - {1}".format(timestamp.strftime(TIMESTAMP_FORMAT), self.name)
        draw_buffer.text((5, 5), stamptext, font=font)

    def SaveImage(self, image):
        with open(self.filename, "a+b") as video_file:
            image.save(video_file, "JPEG")
            video_file.flush()

    def Update(self):
        image = self.CaptureImage()
        self.TimestampImage(image)
        self.SaveImage(image)
        print("Captured image from {0} camera to {1}".format(self.name, self.filename))


if __name__ == "__main__":
    cameras = []
    cameras.append(Camera("porch", "http://username:password@10.17.42.172/SnapshotJPEG?Resolution=640x480&Quality=Clarity", "cam1.mjpeg"))
    cameras.append(Camera("driveway", "http://username:password@10.17.42.174/SnapshotJPEG?Resolution=640x480&Quality=Clarity", "cam2.mjpeg"))
    cameras.append(Camera("backyard", "http://username:password@10.17.42.173/SnapshotJPEG?Resolution=640x480&Quality=Clarity", "cam3.mjpeg"))
    cameras.append(Camera("sideyard", "http://10.17.42.176/image/jpeg.cgi", "cam4.mjpeg"))
    cameras.append(Camera("stairway", "http://10.17.42.175/image/jpeg.cgi", "cam5.mjpeg"))
    
    print("Capturing images from {0} cameras every {1} seconds...".format(len(cameras), LAPSE_TIME))
    
    try:
        while (True):
            for camera in cameras:
                camera.Update()
                
            time.sleep(LAPSE_TIME)
            
    except KeyboardInterrupt:
        print("\nExit requested, terminating normally")
        sys.exit(0)

Notice the URLs supplied in the Camera constructors. These are specific to each brand of camera, but you can usually find the format with a little Googling. In my program above, the first three cameras are Panasonic BL-C101A network cameras; the last two are a D-Link DCS-930L and a TrendNet TV-IP551W, which both have very similar software and URLs.

The font file referenced above needs to be located in the same directory as the Python script, and for best results should be a monospace font. I just grabbed the Ubuntu Monospace Bold TrueType font file for use here, but you could use anything you like.

You will probably want to launch this as a background task so that it can run for extended periods of time. I have it running on the same server that runs my ZoneMinder setup, so it can run 24-7 collecting time-lapse video. I also wrote a quick little script file that iterates the mjpeg files it finds and converts them to mp4 for easier viewing and archiving:

mjpeg2mp4:
#!/bin/bash

echo "Removing old files..."
rm -fv *.mp4

echo "Converting files to mp4..."
for f in *.mjpeg ; do
    t=${f%mjpeg}mp4
    echo "  Converting $f to $t"
    avconv -r 30000/1001 -i "$f" -q 5 "$t" 2>/dev/null
done

echo "Done!"
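To keep talicam.py collecting frames around the clock, launch it as a background task that survives your logout - something like:

nohup ./talicam.py > talicam.log 2>&1 &

(assuming you've made the script executable with chmod +x talicam.py).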

I had a lot of fun learning a few new tricks while working on this, and hopefully you can use it as a starting point for your own time-lapse adventure. If you find this post useful, or have questions about how it works, please leave a comment below.

 