Continuing from experimenting with static magnetic fields and Hall effect sensors for positioning, this is an attempt at using optical tracking for the same purpose. Optical tracking is a proven technology used in many VR headsets, such as the Oculus Rift and PlayStation VR. However, because its refresh rate is limited by the camera's frame rate, it has to be paired with an accelerometer and gyroscope to provide faster updates, with optical tracking used to correct the drift introduced by the IMU. The accuracy of optical tracking depends on many factors, such as lighting conditions, image-recognition algorithms, processing power, the camera and the design of the markers, with the latter three contributing directly to the BOM. So here we look at what can be achieved using a standard laptop webcam and printed QR/ArUco codes.
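To make the IMU/optical split concrete, here is a minimal, dependency-free 1-D sketch of the drift-correction idea: the IMU updates fast but accumulates drift, and each (much slower) optical fix blends the estimate back toward the camera measurement. All names and numbers are illustrative, not from the actual project.

```javascript
// Hypothetical 1-D sketch: fast IMU integration corrected by slow optical fixes.
function fusePosition(imuEstimate, opticalFix, alpha) {
  // alpha close to 1 trusts the IMU short-term; (1 - alpha) pulls toward the optical fix
  return alpha * imuEstimate + (1 - alpha) * opticalFix;
}

// The tracked object is stationary at position 1.0, but the IMU
// erroneously drifts by +0.01 per tick.
const truePos = 1.0;
let uncorrected = 1.0;
let corrected = 1.0;
for (let t = 0; t < 100; t++) {
  uncorrected += 0.01;            // pure IMU integration drifts away
  corrected += 0.01;
  if (t % 10 === 9) {             // an optical fix arrives every 10th tick
    corrected = fusePosition(corrected, truePos, 0.5);
  }
}
console.log(uncorrected.toFixed(2), corrected.toFixed(2)); // drifted far vs. held near 1.0
```

Real trackers use a proper filter (complementary or Kalman) over 6-DOF poses, but the principle is the same: the camera's absolute fixes bound the IMU's unbounded drift.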
How many components does it take to switch on a light by voice? Here we look at what's involved in using a NeoPixel strip as a Google Home connected light bulb.
There are multiple ways of issuing commands to Google Assistant, such as via IFTTT or api.ai, but only using the Actions SDK directly lets us issue short commands like "Hey Google, lights on" rather than "Hey Google, tell house lights to switch on". Registering as a 'Light' also lets us use existing built-in traits like 'brightness' and 'colour' without having to write our own intents and reinvent the wheel.
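As a rough illustration of what registering as a 'Light' buys us, the device description returned to Google lists the built-in traits instead of custom intents. The fragment below is a sketch only — the device-type and trait identifiers follow the Smart Home naming scheme as I understand it, and the exact payload shape is an assumption rather than the project's actual code:

```json
{
  "payload": {
    "devices": [
      {
        "id": "neopixel-strip-1",
        "type": "action.devices.types.LIGHT",
        "traits": [
          "action.devices.traits.OnOff",
          "action.devices.traits.Brightness",
          "action.devices.traits.ColorSpectrum"
        ],
        "name": { "name": "house lights" }
      }
    ]
  }
}
```

With that in place, Google handles parsing "lights on", "dim to 50%" and colour names for us.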
Ignoring the initial OAuth handshake, the flow of data is as follows:
Voice > CHIP/Phone > Google Assistant > HTTPS > Express > Mosca MQTT > ESP32 > NeoPixel
Connecting to Google Actions requires a publicly accessible HTTPS endpoint, but getting proper SSL working on an embedded device in a home network behind a router is not easy, to say the least. So I opted to create a Dockerized HTTPS-to-MQTT gateway running on a publicly accessible VPS, which forwards the received HTTPS traffic via MQTT to connected devices and responds to the original request with the MQTT reply from the device.
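The core of such a gateway is request/reply correlation: each incoming HTTPS request is published to the device's MQTT topic with a correlation ID, and the HTTP response is held open until the matching MQTT reply comes back (or a timeout fires). Here is a minimal, dependency-free sketch of that pattern — the broker is faked with an in-memory object, and all names are illustrative rather than the actual gateway code:

```javascript
// Sketch of the HTTPS-to-MQTT bridge pattern: publish the request with a
// correlation ID, park the pending HTTP response, and resolve it when the
// device's MQTT reply arrives (or reject on timeout).
const pending = new Map(); // correlationId -> { resolve, timer } of the held response
let nextId = 0;

function handleHttpsRequest(broker, deviceTopic, command, timeoutMs = 2000) {
  const id = String(++nextId);
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => {
      pending.delete(id);
      reject(new Error("device did not reply"));
    }, timeoutMs);
    pending.set(id, { resolve, timer });
    broker.publish(deviceTopic, { id, command }); // forward to the device over MQTT
  });
}

function handleMqttReply(reply) {
  const entry = pending.get(reply.id);
  if (!entry) return;          // late or unknown reply: drop it
  clearTimeout(entry.timer);
  pending.delete(reply.id);
  entry.resolve(reply.result); // complete the original HTTPS request
}

// Fake broker: a "device" that immediately answers every published command.
const fakeBroker = {
  publish(topic, msg) {
    handleMqttReply({ id: msg.id, result: `${msg.command}:ok` });
  },
};

handleHttpsRequest(fakeBroker, "devices/esp32-1", "lights_on")
  .then((result) => console.log(result)); // → lights_on:ok
```

In the real gateway the same shape sits behind an Express route and a Mosca broker; the timeout matters because Google Actions expects a prompt response even when a device is offline.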
It’s been almost 2 years since the Planning Poker mobile web app and a lot has changed in the web landscape. WebGL has become a standard in desktop and mobile browsers, and there are a lot of great frameworks like three.js, babylon.js and x3dom. So I decided to recreate the planning poker app using three.js. I picked three.js mainly because it seems to be the most popular framework, with more documentation and resources available online. It took me a while to learn the best ways of doing things and the gotchas, but once I did, I was impressed by the performance and how well it worked on mobile.
You can now see the end result at http://chris-gunawardena.github.io/planning-poker/
Full source available at https://github.com/chris-gunawardena/planning-poker
Planning Poker is an estimation technique used by scrum teams to make faster and more accurate estimates using a deck of cards. Now, instead of looking for a deck of cards, all you have to do is open http://chris-gunawardena.github.io/planning-poker/ on your mobile phone 🙂
I was really taken by this timelapse video of a physical scrum board, but unfortunately we had a virtual JIRA board at the time. So using CasperJS/PhantomJS/SpookyJS (headless browser screen capture), NodeJS/Heroku (backend), AngularJS/Bootstrap (frontend), imgur (image hosting) and Uptime Robot (scheduling), I was able to whip up a poor man's timelapse for virtual JIRA boards using only the free tiers of these services.
Uptime Robot performs a health check on the URL /api/projects/take_screenshots every 15 minutes, which is mapped to the take_screenshots() function in the projects controller. This function logs into JIRA using CasperJS, navigates to the Scrum/Kanban board and takes a screenshot.
This screenshot is then uploaded via the imgur.com API and the returned URL is inserted into MongoDB.
AngularJS retrieves each project with its captured screenshot URLs and displays them in a Bootstrap carousel.
TODO: Create videos from the screenshots and upload to youtube.
This is an open source project, so feel free to fork and send pull requests.
req.uest.info is an open source online debugging tool for capturing and displaying requests from clients in JSON without the need for a server, built using Node, MongoDB, Express, Angular & Socket.io.
The demo http://submit-request.herokuapp.com/ is currently hosted on Heroku. Simply set your form or application submit url to http://submit-request.herokuapp.com/submit/ and view the submission data.
When a REST request is made to the /submit URL, the Node Express route handler re-transmits the data to all connected clients using a Socket.io emit.
This message is handled on the client side by the Angular Socket.io service.
On row click, the data is passed through a JSON syntax highlighter and displayed.
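The relay step above can be sketched without any dependencies: when a request hits the submit handler, the payload is fanned out to every connected client, which is exactly what Socket.io's broadcast emit does for real. Fake socket objects stand in for the websocket connections here, and all names are illustrative:

```javascript
// Dependency-free sketch of the /submit relay: every connected client
// receives a copy of each captured request.
const connectedClients = [];

function connect() {
  const socket = { received: [] }; // stands in for a socket.io connection
  connectedClients.push(socket);
  return socket;
}

function handleSubmit(payload) {
  // re-transmit the captured request to all connected clients
  for (const socket of connectedClients) {
    socket.received.push(payload);
  }
}

const a = connect();
const b = connect();
handleSubmit({ method: "POST", body: { name: "test" } });
console.log(a.received.length, b.received.length); // → 1 1
```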
TODO: Currently requests are visible to everyone by default. I’m working on allowing logged-in users to receive private requests from a set of IP addresses. User authentication and IP address management are already implemented using MongoDB and passport.js. What’s left is to look up the IP address when a request arrives and, if a match is found, send the request only to that user’s connected client (the IP address of each logged-in client will have to be stored with their socket connection).
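That planned lookup can be sketched as a pair of maps: one from whitelisted IP to user, one from user to connected socket. A request from a matching IP goes only to the owner's socket; anything else stays public as it does today. This is a sketch of the plan under those assumptions, not the app's actual code:

```javascript
// Sketch of the planned private-request routing.
const socketsByUser = new Map(); // userId -> fake socket
const userByIp = new Map();      // whitelisted IP -> userId (from MongoDB in the real app)

function registerClient(userId, ips) {
  const socket = { received: [] };
  socketsByUser.set(userId, socket);
  for (const ip of ips) userByIp.set(ip, userId);
  return socket;
}

function routeRequest(sourceIp, payload, publicFeed) {
  const userId = userByIp.get(sourceIp);
  const socket = userId && socketsByUser.get(userId);
  if (socket) {
    socket.received.push(payload); // private: only the owning user's client sees it
  } else {
    publicFeed.push(payload);      // no match: visible to everyone, as today
  }
}

const alice = registerClient("alice", ["10.0.0.5"]);
const publicFeed = [];
routeRequest("10.0.0.5", { body: "secret" }, publicFeed);
routeRequest("192.168.1.9", { body: "open" }, publicFeed);
console.log(alice.received.length, publicFeed.length); // → 1 1
```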
Feel free to give feedback in the comments below or to fork and submit a pull request.
When a story is ‘done’ and ‘on the bus’, it’s expected to stay that way until go-live. When working in a large team that shares a code base, it’s common to break each other’s stuff.
Automated regression testing has long existed for most kinds of applications, but it has been rare for websites because it’s hard to write tests that verify a page is being displayed correctly. Today, open-source tools like Wraith, Huxley, PhantomCSS and PhotoBox solve this using screen-capture comparison techniques.
What will this post help me do?
Automate website regression testing for agile teams that use JIRA. This will prevent you from committing code that undoes previously completed stories.
JIRA stories are used to add test URLs. Every team member can view them, which helps increase test coverage.
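At its simplest, the screen-capture comparison those tools rely on is a pixel diff against a baseline: a story's URL is screenshotted once when it's done, and later captures are flagged when too many pixels change. The sketch below illustrates the idea with flat arrays of pixel values; real tools decode actual PNGs and do fuzzier matching, so treat this as a toy model:

```javascript
// Toy model of screenshot comparison: flag a regression when the fraction
// of differing pixels exceeds a tolerance.
function diffRatio(baseline, candidate) {
  if (baseline.length !== candidate.length) return 1; // size change = full mismatch
  let differing = 0;
  for (let i = 0; i < baseline.length; i++) {
    if (baseline[i] !== candidate[i]) differing++;
  }
  return differing / baseline.length;
}

function hasRegressed(baseline, candidate, tolerance = 0.01) {
  return diffRatio(baseline, candidate) > tolerance;
}

const before = [0, 0, 0, 255, 255, 255, 128, 128];
console.log(hasRegressed(before, [...before]));                // → false
console.log(hasRegressed(before, [0, 0, 0, 255, 0, 0, 0, 0])); // → true
```

The tolerance is what keeps anti-aliasing noise and timestamps from producing false positives.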
This started out as a quick project to interactively showcase my mobile portfolio, but turned into a project in its own right, taking a few days to complete. It allows you to embed a mobile site at any size while maintaining the scale of its contents.
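The trick behind keeping the content's scale is to render the mobile site at its natural viewport width and then shrink the whole frame with a CSS transform, so text and layout keep their proportions at any embed size. A minimal helper for computing that transform might look like this (names are illustrative, not the project's code):

```javascript
// Sketch: render at the device's natural width, then scale the frame down
// so the embedded site keeps its proportions at any embed size.
function embedTransform(embedWidth, deviceWidth) {
  const scale = embedWidth / deviceWidth;
  return `scale(${scale})`; // applied with transform-origin: top left
}

// A 375px-wide phone viewport squeezed into a 150px-wide embed:
console.log(embedTransform(150, 375)); // → scale(0.4)
```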
Github project page:
Continue reading for an example.
This mobile web app was created to be device and platform agnostic. It currently works on iOS, Android, Windows Phone 8.1 and any desktop browser without needing to download and install from an app store.
Agile Planning Poker is an estimation technique used by scrum teams to make faster and more accurate estimates using a deck of cards. Now, instead of looking for a deck of cards, all you have to do is open http://planning-poker.net/v1/ on your mobile phone 🙂
Source code available at https://github.com/chris-gunawardena/planning-poker/tree/v1-jquery-iscroll
Continue reading for a demo.
Web development has long been an ad hoc process, and the trend has been to clone or download various libraries/frameworks into folders, depending on your programming background, and never touch them again. Most don’t minify, lint or combine JS/CSS files, for the sake of readability and maintainability. Thanks to new developments in web technologies, this doesn’t have to be a Friday-night argument at the pub, along with where to place the opening brackets and how many spaces/tabs to use for indenting.