Wednesday, June 14, 2017

Best IP POE camera compatible with Blue Iris

I recently purchased, and ultimately returned, a LaView 4 1080P NVR/POE IP Cam combo from Amazon for $600 (It's listed cheaper now).


They claimed to be ONVIF compliant, but for the life of me I could not get Blue Iris to connect to the cameras. I noticed that the NVR assigned the cameras IPs on a seemingly random subnet, and I could not reach the individual cameras directly without passing through the NVR.

When I gave an individual camera a manual IP, it was unreachable. If I bypassed the NVR and plugged it straight into the network over POE, it wouldn't pull a DHCP address.

My old setup was a Night Owl (I forget the model number). Since it was the old analog BNC type of camera setup, I had no choice but to go through the DVR. I could, however, connect to the streams directly over RTSP by addressing the camera number (1-4). This was a breeze to add to Blue Iris.

The LaView was a hassle. I thought, given the price, it would beat cheaper NVR alternatives, but I found the UI clunky, slow and unintuitive.

I have indoor Amcrest wifi cameras and saw they now sell POE outdoor cameras. I've never had an issue with my Amcrests, so I decided to ditch the NVR model and pull the feeds straight from the cameras.

While I have a POE switch, it does not provide adequate wattage to power all my cameras, so I purchased a standalone 8-port POE injector for $40.


I purchased two each of the Amcrest outdoor POE models, the Amcrest IP2M-844E and the Amcrest IP2M-842E, which match or exceed the specs of the LaView option. They cost $75 each (both models).

The total for the Amcrest setup was $340... almost half the price of the LaView and very easy to set up in Blue Iris.

I also have another 4 ports on my POE injector to add more cameras if I choose.

Long story short: skip the NVR and record directly to your own device. If you don't want to pay for the proprietary software license for Blue Iris and/or Windows, there are alternatives.

You could set up a Linux box and use Motion to build your own NVR with a lot of flexibility.

Blue Iris costs money, but it does a lot. It's also a GUI app, which I like when dealing with videos due to the instant gratification of knowing if it's working.

Motion works great, but you can get bogged down in options and the Linux+CLI nature is off-putting to most.

One of the downsides to most DVRs, NVRs and standalone IP cameras has to be the requirement for IE with an add-on. I usually just spin up a test VM with Windows, install and configure everything there, then delete the VM. With the LaView this did not work: I could modify the settings, but could not view the live stream. It did not like the virtual VGA adaptor.

The Amcrest worked, but still required IE to set up. At least I was able to verify I had a stream before I added it to Blue Iris. I hope one day IP cameras will just work in Chrome over HTML5, without any plugins or extra software required.

Saturday, February 11, 2017

Record Google Nest Cam from Blue Iris

Since Nest is pushing their expensive ($10 to $30 per month) cloud recording feature, the Nest Cam is pretty locked down to prevent recording video locally to your own network. There are APIs you can use to scrape some information, but nothing to record video and audio to a local file share/FTP/NAS out of the box.

I use Blue Iris to record my other network cameras and found a hack to do the same on Nest.

There are some privacy concerns with this method, but the tradeoff is that you get both cloud and local storage of video events, plus more flexibility for live viewing and playback.

Requirements: an Apache server with PHP installed, plus (obviously) a Nest Cam and Blue Iris.

The first step is to enable sharing of the video. No one can access this feed without the URL, and even if it is found, they cannot view archived video or manipulate settings.

Nest UUID

Open the Nest camera properties and select "Camera sharing".

Select "Share publicly" and click "Agree & Share"

You will get a URL like this:
https://video.nest.com/live/xxxxxxxxxx
Open that link in a new tab. View the page source and search for UUID. You will have a result like this:

https://nexusapi.dropcam.com/get_image?uuid=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&width=560
(obviously xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx is your uuid)

Apache script

On your Apache server, create a directory called "nest"
mkdir /var/www/html/nest && cd /var/www/html/nest
Create a php file called nest.php
vi nest.php
Add this code. (Don't forget to change xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx to your unique uuid)
<?php
// Fetch a single snapshot from the Nest sharing API and save it as image.jpg.
// Replace the x's with your UUID from the page source above.
$uuid = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
$url  = "https://nexusapi.camera.home.nest.com/get_image?uuid=" . $uuid
      . "&cachebuster=" . rand(10, 1000000) . "&height=512";
file_put_contents('image.jpg', fopen($url, 'r'));
?>

Verify you can run it with PHP
php nest.php
If you get a file called "image.jpg" in "/var/www/html/nest/" and can browse to http://<yourserverip>/nest/image.jpg, you should be ready to add it to Blue Iris.

Run a watch command to regenerate that file every 0.1 seconds (roughly 10 frames per second). watch does not allow intervals shorter than this.
watch -n 0.1 php nest.php
(There are many ways of doing this that are much better. It could be run from cron or wrapped in a proper service to improve reliability and performance. This is just a simple test and needs a lot of improvement; a rough alternative is sketched below.)
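The watch approach is fine for a quick test. As a rough alternative sketch (my own, not part of the original setup; it assumes curl is installed and reuses the same snapshot URL and placeholder UUID as nest.php), a small loop can download to a temporary file and rename it into place, so Blue Iris never grabs a half-written JPEG:

#!/bin/bash
# Rough sketch: poll the Nest snapshot URL about 10 times per second.
# Replace the x's with your UUID; assumes the web root used above.
UUID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
OUT="/var/www/html/nest/image.jpg"
while true; do
    curl -s -o "${OUT}.tmp" \
        "https://nexusapi.camera.home.nest.com/get_image?uuid=${UUID}&height=512&cachebuster=${RANDOM}" \
        && mv -f "${OUT}.tmp" "${OUT}"
    sleep 0.1
done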

Blue Iris configuration

Once you have your URL:
http://<yourip>/nest/image.jpg

In Blue Iris, add a camera, go into the video tab and click configure.



Address
<your apache server ip>
Path
/nest/image.jpg
Click OK



Check "Anamorphic (force size)" then change "Image format" to 904 x 512

You can play with the sizes but I've noticed a performance hit when trying to run anything greater than 720p.

Rant...

Until Google updates the firmware to allow local file storage from the Nest camera, this is the only solid way I've found to import the feed and save it locally. I don't want to pay for their cloud service when I can just upload my own clips to dropbox or any other free file service.

Google's integration with Dropcam/Nest doesn't seem well implemented. You would think that Nest would allow you to save your video clips to Google Drive. A perk to paying for the premium Google Drive service should be that you can record your Nest clips there for free. Instead, you are essentially billed twice for storage.

I'd go so far as to say do not purchase this device if you want more options such as using Blue Iris. Amcrest/D-Link/Netgear and dozens of vendors all do the same and are ONVIF compliant... and I can link those to my Google Drive/Dropbox storage for cloud backups.

Nest is not ONVIF compliant and their app sucks. It's the same app as the (awesome) Nest thermostat... but there should be a separate, dedicated app for the Nest cameras. It's too cluttered and not intuitive.

For the same price of a Nest cam, you can get two wifi PTZ cameras with the same quality video (if not better) and many more features such as the ability to save to a MicroSD card.

If your internet goes out with the Nest, your camera stops recording.

Friday, December 11, 2015

Block Comcast Xfinity data cap popup

If you have recently been moved into a Comcast data cap market, you will have 2 months of continued unlimited data before being charged any overages.

The problem is that you will see HTML-injection popups on any site that is not served over HTTPS.

These cannot be blocked via a hosts file/DNS/BIND filter; the injected content is served from http://servicealerts.comcast.net:8080/


While it's nice that they are giving us a couple of months before actually charging, you have to suffer through popups at every usage threshold: 90%, 100%, 110%, 125% and up.

To block these, in Adblock, manually edit your filters to include:

##*#comcast_content

If you are curious, the code looks like this:

Friday, November 13, 2015

How to install Guacamole 0.9.8 for Ubuntu 14.04 and secure with Nginx ssl

Guacamole is a pretty straightforward RDP/VNC/SSH utility that requires no plugins on client systems. It uses HTML5 to serve up the connections directly in the browser.

This is a pretty standard install to connect to a Windows RDP host. We will secure the server with UFW, fail2ban and SSL, using nginx as a reverse proxy.

First, enable the firewall and allow the following ports:

sed -i 's/ENABLED=no/ENABLED=yes/g' /etc/ufw/ufw.conf
ufw allow 22 && ufw allow 8080

Install the prerequisite software

apt-get update && apt-get install -y fail2ban build-essential htop libcairo2-dev libjpeg62-dev libpng12-dev libossp-uuid-dev tomcat7
apt-get install -y libfreerdp-dev libpango1.0-dev libssh2-1-dev libtelnet-dev libvncserver-dev libpulse-dev libssl-dev libvorbis-dev
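fail2ban was included above because the default package on Ubuntu 14.04 ships with an SSH jail enabled out of the box. A quick check that it is active (the jail name can vary by release):

fail2ban-client status
fail2ban-client status ssh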

Download and extract the guacamole server files

cd ~
wget http://sourceforge.net/projects/guacamole/files/current/source/guacamole-server-0.9.8.tar.gz
tar -xzf guacamole-server-0.9.8.tar.gz && cd guacamole-server-0.9.8/

Compile and make the program (this may take some time depending on your hardware).

./configure --with-init-dir=/etc/init.d && make && make install

Now we want to update the library cache and update the init scripts so it will start on bootup

ldconfig && update-rc.d guacd defaults

Create the main Guacamole configuration folder and file

cd ~ && mkdir /etc/guacamole

Edit the main configuration file with vi (or nano/your editor of choice); it provides the location of the user-mapping.xml file, among other settings.

vi /etc/guacamole/guacamole.properties
# Hostname and port of guacamole proxy
guacd-hostname: localhost
guacd-port: 4822

# Location to read extra .jar's from
lib-directory: /var/lib/tomcat7/webapps/guacamole/WEB-INF/classes

# Authentication provider class
auth-provider: net.sourceforge.guacamole.net.basic.BasicFileAuthenticationProvider

# Properties used by BasicFileAuthenticationProvider
basic-user-mapping: /etc/guacamole/user-mapping.xml

For the Guacamole client, we are going to add a simple RDP connection. This is highly configurable, so be sure to read up on their site to see all the variables.

The Guacamole web interface will have a username of guacadmin and a password of guacpass.

For our purposes we are going to connect to a Windows box at 192.168.0.25 with the username winuser and the password winpass, on the standard port 3389. Change these fields to match the Windows box you want to connect to.

vi /etc/guacamole/user-mapping.xml
<user-mapping>
 <authorize username="guacadmin" password="guacpass">
  <protocol>rdp</protocol>
  <param name="hostname">192.168.0.25</param>
  <param name="port">3389</param>
  <param name="username">winuser</param>
  <param name="password">winpass</param>
 </authorize>
</user-mapping>
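Since user-mapping.xml contains plaintext credentials, you may want to tighten its permissions. A small sketch, assuming Tomcat runs as the default tomcat7 user and group on Ubuntu:

chown root:tomcat7 /etc/guacamole/user-mapping.xml
chmod 640 /etc/guacamole/user-mapping.xml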

You may need to adjust the Remote Desktop settings on the Windows box to allow connections from clients that do not use Network Level Authentication (NLA).


Update tomcat to point to the user authentication files.

mkdir /usr/share/tomcat7/.guacamole
ln -s /etc/guacamole/guacamole.properties /usr/share/tomcat7/.guacamole
wget http://sourceforge.net/projects/guacamole/files/current/binary/guacamole-0.9.8.war
cp guacamole-0.9.8.war /var/lib/tomcat7/webapps/guacamole.war

Restart the tomcat and guacamole services

service guacd start && service tomcat7 restart

You should now be able to access your server via port 8080. For instance, my server is at 192.168.0.24:

http://192.168.0.24:8080/guacamole/
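If you want to confirm from the command line that Tomcat has deployed the war (assuming curl is installed), a HEAD request against the app should return an HTTP 200 or a redirect:

curl -I http://127.0.0.1:8080/guacamole/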


Log in with the guacadmin/guacpass credentials and it will automatically log you into windows using the credentials you supplied in the user-mapping.xml file.



If you do not want it running in the /guacamole subfolder and would rather serve it from the server root:

service tomcat7 stop
mv /var/lib/tomcat7/webapps/ROOT /var/lib/tomcat7/webapps/ROOT.bkp
mv /var/lib/tomcat7/webapps/guacamole /var/lib/tomcat7/webapps/ROOT
service tomcat7 start

You should now be able to access it via the ip and port: http://192.168.0.24:8080


To further harden the server and allow HTTPS over a standard port, nginx can be installed side-by-side with Tomcat to provide a reverse proxy and encryption.

apt-get install -y nginx

Don't forget to update the firewall to allow port 443 and remove port 8080

ufw allow 443 && ufw delete allow 8080

Generate the SSL keys

mkdir /etc/nginx/ssl && openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/guacamole.key -out /etc/nginx/ssl/guacamole.crt

If you are using a DNS name to access the server, or anything other than an IP address, make sure you include the FQDN.
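As an optional sanity check, you can confirm the subject and validity dates of the certificate you just generated:

openssl x509 -in /etc/nginx/ssl/guacamole.crt -noout -subject -dates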



We want to clear out the default nginx configuration and add our own

mv /etc/nginx/sites-available/default /etc/nginx/sites-available/default.bkp
rm /etc/nginx/sites-enabled/default

vi /etc/nginx/sites-available/guacamole
server {
        listen 443 ssl;
        server_name 192.168.0.24;

        access_log   /var/log/nginx/guacamole.access.log ;
        error_log    /var/log/nginx/guacamole.error.log info ;

        ssl_certificate /etc/nginx/ssl/guacamole.crt;
        ssl_certificate_key /etc/nginx/ssl/guacamole.key;

        location / {
        proxy_buffering off;
        proxy_pass  http://127.0.0.1:8080;
        }
}

ln -s /etc/nginx/sites-available/guacamole /etc/nginx/sites-enabled/guacamole
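Before restarting, it's worth validating the configuration syntax:

nginx -t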

Restart nginx

service nginx restart

You should now be able to access the server via the standard https port

https://192.168.0.24


Saturday, September 5, 2015

Export Oracle Virtualbox signed hardware cert and slipstream it into a Windows install

If you deploy a lot of Windows 7 boxes using VirtualBox and slipstream prerequisite files using NTLite and/or answer files, you know the otherwise fully automated VirtualBox Guest Additions install stops and prompts you to trust the Oracle driver publisher:



Thus, the install is not fully automated. To get around this, you can use PowerShell to export the cert from another Windows guest that already has the Guest Additions installed.

Run powershell with elevated privileges and execute these commands:

cd cert:\LocalMachine\TrustedPublisher
# find the Oracle certificate in the Trusted Publishers store
$cert = dir | where { $_.Subject -like "*Oracle*" }
# export it to C:\oracle.cer
$type = [System.Security.Cryptography.X509Certificates.X509ContentType]::Cert
$bytes = $cert.Export($type)
[System.IO.File]::WriteAllBytes("C:\oracle.cer", $bytes)

You should now have the exported Oracle cert in the root of your C: drive, named oracle.cer.



To make this easier to install during the slipstream process, package the cert in a 7-Zip SFX executable file. (This sounds more complicated than it is if you've never used it; it simply creates a self-extracting zip.)


Now, using NTLite or whatever app you use for your post-process scripts, we need to extract the cert to the C: drive and import it into the host using certutil.


oracle.exe -oc:\ -y

(There is no space between the o and c in the script above)

certutil -addstore -f "TrustedPublisher" c:\oracle.cer

You should have something like this:


After you compile the ISO, your fully automated installs shouldn't prompt to trust that cert in the middle of your post processing.

Saturday, August 15, 2015

Build a Yahoo Pipes replacement with Tiny Tiny RSS on Ubuntu 14.04

TT-RSS is an open source Yahoo Pipes (and to a large part, Google Reader) replacement. You can filter feeds, import OPML files and access via APIs and Android. It also has plugin support.

I'm going to list the steps to install Tiny Tiny RSS on a clean install of Ubuntu 14.04 using Apache, PostgreSQL, GIT and PHP5.

For demonstration purposes I'm not including the best security practices (SSL, firewall, fail2ban, user permission best practices, etc). Please refer to this great guide to secure your installation.

From a clean Ubuntu 14.04 LTS, install the following packages:
apt-get update && apt-get install -y apache2 git ntp postgresql-contrib php5 php5-curl php5-cli php5-pgsql

Make sure your hostname is set and update the Apache config to prevent FQDN error messages.
echo "ServerName $HOSTNAME" >> /etc/apache2/apache2.conf

Also, since timing is important with RSS feeds, we installed the NTP service.
service ntp reload

For demonstration purposes we are going to recycle the standard HTML document root rather than create a virtual host, so we will git clone into the generic Apache directory.
git clone https://tt-rss.org/git/tt-rss.git /var/www/tt-rss
mv /var/www/html /var/www/html2 && mv /var/www/tt-rss /var/www/html

Now we are going to create a PostgreSQL user and database. Make sure you enter your own unique username and password (and document them).
su - postgres
createuser -P -s -e ttpguser

It will prompt you to create a password for the ttpguser (or whichever username you chose).
Now we will create the database
createdb ttrssdb
exit

After you exit, you should be back as root. We need to modify the PostgreSQL client authentication file, /etc/postgresql/9.3/main/pg_hba.conf (use nano if you are more comfortable with it), and add the user we created.

After this line:
local   all             postgres                                peer
Add this entry (make sure that it's the same username you created earlier):
local   all             ttpguser                                   md5

Update permissions for the cache and various other directories (a slightly tighter alternative is sketched after these commands):
chmod -R 777 /var/www/html/cache/images
chmod -R 777 /var/www/html/cache/upload
chmod -R 777 /var/www/html/cache/export
chmod -R 777 /var/www/html/cache/js
chmod -R 777 /var/www/html/feed-icons
chmod -R 777 /var/www/html/lock
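If you would rather not leave these directories world-writable, a slightly tighter sketch (assuming Apache runs as the default www-data user on Ubuntu) is to hand ownership to the web server instead:

chown -R www-data:www-data /var/www/html/cache /var/www/html/feed-icons /var/www/html/lock
chmod -R 755 /var/www/html/cache /var/www/html/feed-icons /var/www/html/lock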

Restart the services
service postgresql restart && service apache2 restart
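Optionally, confirm the new role can log in over the local socket (it should prompt for the ttpguser password you set):

psql -U ttpguser -d ttrssdb -c 'SELECT version();'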

Browse to the IP address of the server you created and you will be directed to http://<yourIP>/install if everything went alright.
Enter the following parameters
Username ttpguser
DB name ttrssdb
Hostname (leave blank)
Port 5432

Once that's done, initialize the database. You will get a block of configuration code that you need to copy into config.php in your web root.
vi /var/www/html/config.php

Paste the code in and save the file. Go back to the root site, http://<yourip> and you should be welcomed with a login screen. The default username and password are admin and password.

To change this, go to Actions -> Preferences -> Users -> Admin -> Change password -> fill out an email address (for some reason, you can't change the admin password without entering one) -> Save -> (refresh the browser).
Log back in as admin with the new password. Head back to the Users section. Add a new user with the username you wish to use. Click on it again and enter a password.

You should be able to log in with the non-admin user.

To add a feed, go to Actions -> Subscribe to feed. You will notice the feed does not update on its own. There are multiple ways to update; check the main site for more information on setting up cron jobs, etc. For testing, we can simply type:
su -c "php /var/www/html/update_daemon2.php" -s /bin/sh www-data&

This will start scrolling text, and will continue to run in the background. If you end the process, the updates will stop. Read the main tt-rss page on updating for more information.
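For something more permanent than a backgrounded daemon, the tt-rss documentation describes running the updater from cron. A rough sketch, assuming the paths used above (run crontab -u www-data -e and add the line):

*/30 * * * * /usr/bin/php /var/www/html/update.php --feeds --quiet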

You should see feeds start to appear. You can also import your OPML file now.

Monday, February 23, 2015

Dropbear SSH vulnerabilities in stock Arduino Yun. How to update to OpenSSH remotely.

The Arduino Yún ships with Dropbear version 2011.54-2, which has multiple known vulnerabilities and is not advisable to keep running. If your Arduino is in a remote location and you want to switch to OpenSSH without losing remote access to the device, follow these steps.

  • Change the Dropbear port to an unused/free one on your box and restart Dropbear
    uci set dropbear.@dropbear[0].Port=2222
    uci commit dropbear
    /etc/init.d/dropbear restart
  • Reconnect to your Yun via SSH on the configured port above
  • Install the openssh-server
    opkg update
    opkg install openssh-server
  • Enable and start the OpenSSH server. OpenSSH will now listen on port 22
    /etc/init.d/sshd enable
    /etc/init.d/sshd start
  • Reconnect to your yun via SSH on port 22
  • Now you can disable Dropbear
    /etc/init.d/dropbear disable
    /etc/init.d/dropbear stop
  • Install the openssh-sftp-server package to install support for the SFTP protocol which SSHFS uses
    opkg update
    opkg install openssh-sftp-server
Log into the Yún web interface. Go to Configure > Advanced Configuration > System > Software and, under Installed Packages, remove Dropbear.
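To confirm the switch worked, you can grab the SSH banner from another machine (substitute your Yún's IP address); a stock Yún answers with a dropbear banner, while after the change you should see an OpenSSH one:

echo | nc <yun-ip> 22

For reference, below is the vulnerability scan output for the stock Dropbear service: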


22/tcp
High (CVSS: 7.1)
NVT: Dropbear SSH Server Use-after-free Vulnerability (OID: 1.3.6.1.4.1.25623.1.0.105113)
Summary
This host is installed with Dropbear SSH Server and is prone to a use-after-free vulnerability.
Vulnerability Detection Result
Vulnerability was detected according to the Vulnerability Detection Method.
Impact
This flaw allows remote authenticated users to execute arbitrary code and bypass command restrictions via multiple crafted command requests, related to channels concurrency.
Solution
Updates are available.
Affected Software/OS
Versions of Dropbear SSH Server 0.52 through 2011.54 are vulnerable.
Vulnerability Insight
A use-after-free vulnerability exists in Dropbear SSH Server 0.52 through 2011.54 when command restriction and public key authentication are enabled.
Vulnerability Detection Method
Check the version.
Details: Dropbear SSH Server Use-after-free Vulnerability (OID: 1.3.6.1.4.1.25623.1.0.105113)
Version used: $Revision: 809 $
References
CVE: CVE-2012-0920
BID: 52159
Other: http://www.securityfocus.com/bid/52159
https://matt.ucc.asn.au/dropbear/dropbear.html
22/tcp
Medium (CVSS: 5.0)
NVT: Dropbear SSH Server Multiple Security Vulnerabilities (OID: 1.3.6.1.4.1.25623.1.0.105114)
Summary
This host is installed with Dropbear SSH Server and is prone to multiple vulnerabilities.
Vulnerability Detection Result
Vulnerability was detected according to the Vulnerability Detection Method.
Impact
The flaws allows remote attackers to cause a denial of service or to discover valid usernames.
Solution
Updates are available.
Affected Software/OS
Versions prior to Dropbear SSH Server 2013.59 are vulnerable.
Vulnerability Insight
Multiple flaws are due to, - The buf_decompress function in packet.c in Dropbear SSH Server before 2013.59 allows remote attackers to cause a denial of service (memory consumption) via a compressed packet that has a large size when it is decompressed. - Dropbear SSH Server before 2013.59 generates error messages for a failed logon attempt with different time delays depending on whether the user account exists.
Vulnerability Detection Method
Check the version.
Details: Dropbear SSH Server Multiple Security Vulnerabilities (OID: 1.3.6.1.4.1.25623.1.0.105114)
Version used: $Revision: 809 $
References
CVE: CVE-2013-4421, CVE-2013-4434
BID: 62958, 62993
CERT: DFN-CERT-2013-1865 , DFN-CERT-2013-1772
Other: http://www.securityfocus.com/bid/62958
http://www.securityfocus.com/bid/62993
https://matt.ucc.asn.au/dropbear/dropbear.html