Pneumatic Augmented Reality Tactile Feedback System (PART FS)
Collaborators: Amery Cong, Sean Kerr, Alejandro Ramirez, Teddy Stoddard
Challenge: Northeastern Senior Capstone
Summary:
Designed a wearable pneumatic electronic system to simulate tactile sensation in a virtual environment. The system uses a high-flow diaphragm pump and a series of pressure sensors and valves to inflate individual pouches placed at key sensory locations of the hand on top of a user-worn glove. The pressure sensors allow for variable inflation of each pouch corresponding to the force of a virtual collision, enabling users to feel not only the size and shape of a virtual object but also its weight. A microcontroller on the device receives a series of bytes from a virtual environment specifying the inflation level for each pouch and maintains the desired pressure in each pouch. This lightweight software implementation allows the device to be easily integrated with any virtual environment and hand tracking tool, although the demo showcased here was made in Unity and used the Leap Motion for hand tracking.
Mechanical Design:
A Parker BTC miniature high-flow diaphragm pump that can provide up to 30 psi of pressure and 20 in Hg of suction was chosen as the pressure source. Ultimately the pouches provided an adequate sensation at 8 psi; however, the pump was chosen with oversized specs to ensure the system would function properly on a short project timeline. Two one-way valves were used to connect the positive pressure and negative pressure ends of the pump to ambient. Each end was connected to a high pressure tank and a low pressure tank respectively, and these tanks were connected to a series of one-way valves (two for each pouch, one to inflate and one to deflate the pouch). One-way valves were chosen over two-way valves due to their significantly lower cost. Each pouch was connected to a pressure sensor to allow for regulation at different levels of inflation. Barbed fittings were used throughout to allow for easy prototyping and testing, and the pressure tanks were made of PVC pipe with end caps and barbed fittings epoxied into a threaded hole. The pouches were made of heat-sealed high-impact polyethylene and placed between two separate glove liners that were sewn together at the wrist, creating a single two-layer glove with the pneumatic pouches sandwiched in between. Holes were made in the outer liner layer to allow pneumatic tubing to be routed to the pouches. The glove liners are made of a reflective nylon material to allow for easy detection and hand tracking.
Electrical Design:
Due to timeline constraints, a standard laptop power block with a 12V DC output was used to power the system; however, parts were selected to accommodate battery power in future designs. For instance, a series of 5V solenoid valves was chosen to control the airflow into each pouch instead of 12V valves, so that a greater variety of consumer batteries could power the system in the next iteration of the design. The entire system was designed to run at 5V instead of 12V for this same reason.
Two voltage regulators were used to generate separate 5V power rails: a buck converter for the valve rail and a linear regulator for the logic and sensing circuitry. The buck was used to source the valves since they are higher-power components and the buck can supply the necessary current. Each valve is switched by a low-side FET controlled by the microcontroller, with a flyback diode in parallel to suppress inductive voltage spikes. The logic and sensing circuitry was sourced by an LDO, since the LDO does not have the current ripple of the buck and provides less overall noise to the more sensitive components of the system. The valves were isolated on a separate rail to minimize the effect of the inductive noise on the output of the pressure sensors, especially since that noise would be amplified before being read into the microcontroller. The pressure sensors output differential voltages over a full-scale range of 165 mV, which need to be amplified to the 1-3V range readable by the ADC on the microcontroller.
A two-stage amplifier was used to ensure a robust readout of the pressure sensor values. Each line of the differential output was first run through a voltage buffer and then through a differential amplifier with a gain of 10 and a cutoff frequency of 60 Hz. Since the system is entirely DC and the sensors provide an analog readout, the cutoff frequency was chosen to be as low as possible to reduce the chance of ambient noise corrupting the sensed values.
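As a rough sanity check on those numbers (the amplifier's exact reference offset is not documented here and is assumed): a gain of 10 maps the sensors' 165 mV full-scale differential output to a span of roughly 10 x 165 mV = 1.65 V, which fits inside the 2 V wide 1-3 V window the microcontroller's ADC can read once the amplifier output is offset above 1 V.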
Two separate PCBs were designed for the system: a main board housing the microcontroller, the voltage regulators, and the sensing and amplification circuitry, and valve boards each populated with a set of solenoids to control 5 individual pouches. This was done to create a better form factor as well as to allow the system to scale with each application. If a client wanted a greater number of pouches for higher tactile resolution, they could simply "stack" an additional valve board onto the system and connect it to the main board via ribbon cable. This also allows for a more affordable solution for clients who want a lower tactile resolution (for instance, sensation only on the fingertips) and may need just one valve board instead of the three used in the demoed system.
An Arduino Mega was used as the microcontroller for this circuit due to its high number of available ADC pins and flexibility for easy prototyping.
Software Design:
The Arduino software was designed to regulate the pouch inflation while minimizing latency. If the latency is too great and there is any perceptible delay between a user picking up a virtual object and the tactile feedback, the system becomes useless, as it no longer lets the user engage the virtual environment intuitively. The most resource-intensive call in the Arduino code is the analogRead() performed for every pouch. The system isn't looking for patterns in a complex analog signal, only for analog pressure levels, so there is no need to read in large buffers of analog data. Since the ADC read error introduced by increasing the ADC clock is not significant for such small reads, the ADC prescaler was changed so the ADC clock runs at 1 MHz instead of the default 125 kHz. To minimize calls to analogRead(), instead of reading all of the sensors every loop, a busy flag is set along with an inflate or deflate time calculated from the difference between the current pouch pressure and the target pressure sent to the Arduino as a series of bytes over Serial. When this time is reached, the pressures at the appropriate pouches are checked and corrected if the pressure has overshot or undershot the desired pressure beyond a set margin.
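A minimal sketch of this regulation scheme in Arduino-style C++ is shown below; the pin assignments, packet format, timing constant, and margin are illustrative assumptions rather than the project's exact values.

    // Minimal sketch of the pouch regulation loop (pin numbers, packet format and
    // timing constants are illustrative assumptions, not the project's exact values).
    const int NUM_POUCHES = 5;
    const int sensorPins[NUM_POUCHES]  = {A0, A1, A2, A3, A4};   // amplified pressure sensors
    const int inflatePins[NUM_POUCHES] = {2, 3, 4, 5, 6};        // inflate solenoid FET gates
    const int deflatePins[NUM_POUCHES] = {7, 8, 9, 10, 11};      // deflate solenoid FET gates

    int target[NUM_POUCHES];                 // desired ADC counts, from Serial
    unsigned long checkAt[NUM_POUCHES];      // time when each pouch should be re-checked
    bool busy[NUM_POUCHES];

    void setup() {
      Serial.begin(115200);
      // Assumed AVR-specific tweak: set the ADC prescaler to 16 so the ADC clock
      // runs at 16 MHz / 16 = 1 MHz instead of the default 125 kHz.
      ADCSRA = (ADCSRA & ~0x07) | 0x04;
      for (int i = 0; i < NUM_POUCHES; i++) {
        pinMode(inflatePins[i], OUTPUT);
        pinMode(deflatePins[i], OUTPUT);
      }
    }

    void loop() {
      // One byte per pouch: the target inflation level, scaled here to ADC counts.
      if (Serial.available() >= NUM_POUCHES) {
        for (int i = 0; i < NUM_POUCHES; i++) {
          target[i] = Serial.read() * 4;              // 0-255 -> 0-1020 counts
          int diff = target[i] - analogRead(sensorPins[i]);
          digitalWrite(inflatePins[i], diff > 0);     // open the appropriate valve
          digitalWrite(deflatePins[i], diff < 0);
          // Estimate how long to hold the valve open and mark the pouch busy,
          // instead of re-reading the sensor on every pass through loop().
          checkAt[i] = millis() + (unsigned long)(abs(diff) * 2UL);  // ~2 ms per count (assumed)
          busy[i] = true;
        }
      }
      for (int i = 0; i < NUM_POUCHES; i++) {
        if (busy[i] && millis() >= checkAt[i]) {
          int diff = target[i] - analogRead(sensorPins[i]);
          if (abs(diff) < 8) {                        // within margin: close both valves
            digitalWrite(inflatePins[i], LOW);
            digitalWrite(deflatePins[i], LOW);
            busy[i] = false;
          } else {                                    // over/undershoot: correct and re-check
            digitalWrite(inflatePins[i], diff > 0);
            digitalWrite(deflatePins[i], diff < 0);
            checkAt[i] = millis() + (unsigned long)(abs(diff) * 2UL);
          }
        }
      }
    }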
The Arduino code can be easily modified from reading in Serial to reading in Bluetooth or similar wireless schemes: as long as a string of bytes is sent and assigned to the input buffer of the pressure regulation code, the system should interface easily with other communication protocols in future iterations of the device.
The current demo uses the Leap Motion SDK available for Unity. The demo consists of a series of rendered objects for the user to grasp and manipulate. The Leap Motion tracks the user's hand, and when a digit or a quadrant of the palm collides with one of these virtual objects (detected using Unity's built-in collision engine), a byte is populated corresponding to the pouch that should inflate or deflate and the level it should reach. A serial buffer is populated and sent out at a rate of 60 Hz. The system should be able to interface with other virtual environments and gaming engines in future iterations of the device, as it depends only on a series of bytes sent out based on the collisions experienced. Unity was chosen for the demo due to its extensive technical support, which resulted in a quicker turnaround for this initial demo.
Side profile of PCB stack and inflatable glove
Early breadboard prototype of valve PCBs
Diaphragm pump and pressure tanks
PCB Design for pressure sensing + logic
Logic + Sensing PCB front profile
Logic + Sensing PCB side profile
Glove tracked and rendered in Unity
Tracked hand rendered in Unity
108 W Solar Panels
Collaborators: Eli Abidor and Chris Hickey
Project Dates: 1/5/2014 - 6/10/2014
Summary:
Designed, manufactured, and assembled an array of five 132W solar panels for the Northeastern Solar Boat Club. This was done instead of purchasing commercial panels in order to optimize for weight and to maximize the power delivered to the boat throttle through a maximum power point tracker (MPPT).
Material Selection:
Aluminum honeycomb was chosen for the panel backing due to its light weight and high compression resistance; the compression resistance makes the panels easier to transport and protect. Tedlar was used as an anti-weathering material and placed under the solar cells to prevent the aluminum from staining. The cells were then sealed to the honeycomb by an EVA encapsulation layer that runs around the edges of the panel. Finally, FEP, a corrosion-resistant plastic, was wrapped around the panel to protect it from debris and allow for easy cleaning. The FEP was used in lieu of the glass typically found on commercial panels to significantly cut down on weight.
Assembly:
The solar cells are extremely brittle, so equipment had to be designed to reduce human error during assembly, and this had to be done affordably. A rack was made out of sanded plywood cut to the length of a string of solar cells, with small walls placed to align each individual cell with the rest of the string. These walls had small notches cut into them to run solder tabbing between the cells, allowing us to solder the cells in place with minimal error and helping ensure that each fabricated panel would have nearly identical dimensions and construction.
Due to a lack of available resources to cure the EVA in an airtight vacuum, a makeshift assembly had to be designed to prevent air pockets from forming on the panel during the curing process. A hole was drilled in the bottom of the table the panels were to be assembled on. A shop vacuum was attached to this hole, and air gaps between the top of the table and the panels were sealed with Kapton tape, which has a high melting point and a low risk of melting during the curing process. The vacuum created a near-airtight seal between the EVA and the aluminum backing. Makeshift heat guns (we had one heat gun and removed the safety fuses from commercial hair dryers so they could draw a higher amperage and provide sufficient heat) were used to evenly cure the EVA and encapsulate the panels.
Electrical Design:
During panel fabrication, the strings of cells were arranged so that the voltage of each panel is slightly higher (about 12.4V) than the boat's batteries, allowing for the highest rate of charge while matched to a 12V system. This would accommodate a future setup where the boat is powered by 12V batteries in parallel for a long-distance race event, where greater overall power matters more than top speed. However, the motors selected for the current design only operate at voltages greater than 36V. To accommodate this, the panels are joined by a series of modular switches so they can be set up in different combinations of 12V multiples, allowing matching with this and other potential higher-voltage setups. 12V increments were chosen due to the commercial standards of batteries as well as of the Maximum Power Point Trackers (MPPTs) available on the market. The MPPT actively handles fluctuations in panel voltage to provide a steady charging current to the batteries and maximize the power output of the panels.
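As a worked example using the figures above: three panels in series provide roughly 3 x 12.4 V = 37.2 V, clearing the 36 V minimum of the selected motors, while one or two panels in series stay at the 12 V multiples used for battery charging through the MPPT.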
To prevent hot spot formation on the panels caused by shading, 5A bypass diodes were added to each string of cells. Additionally, blocking diodes were added to the assembly so the panels can remain out at night without draining charge from the batteries.
Our sun lamp test setup to determine the efficiency of the panels
Eli Abidor and I curing the EVA encapsulant.
Solar Boat Throttle and PID Control
Challenge: Solar Splash
Dates: 1/5/2013 - 1/5/2014
Collaborators: Eli Abidor and Josh Johnson
Summary:
Designed an electric throttle to power the drive train and a PID cruise controller to optimize the power consumption of the throttle at a set speed for a race boat in the intercollegiate Solar Splash competition.
Throttle Design:
The Solar Splash guidelines require the use of lead acid batteries and limit battery selection by overall weight, not voltage or battery capacity. Three 12V Optima red top batteries in series were chosen to power the system due to their high cold cranking amperage (CCA) rating of 720 A, fast recharge rate, and robust waterproof design, which is largely beneficial for this application. The high CCA is desirable to ensure the battery doesn't limit the drive train at full throttle during a 70 meter sprint event. In this event, amperage delivered to the drive train is preferred over efficiency so that maximum power reaches the drive train and the boat hits its top speed.
Three standard 3700 RPM Etek DC electric motors were chosen for the drive train due to their relative affordability for an electric motor with a 15 horsepower maximum output. Alltrax AXE 4844 motor controllers were chosen to drive the motors due to their rugged design, extensive documentation, and relatively high operating frequency of 18 kHz for controllers at an affordable price point. The high operating frequency was desired so that any computationally assisted throttle control could be more responsive. The system is controlled by a 0-5k throttle, and polarity protection diodes and surge current fuses were put in place to protect the controllers.
PID Overview:
The Solar Splash competition additionally has an endurance race where the aim is to travel the greatest distance over the course of two hours. During this event the boat is fitted with a 528 W solar panel assembly and a Maximum Power Point Tracker (MPPT), which matches the panels to the boat batteries to improve charging efficiency during the race. The goal of the PID controller is to operate like vehicle cruise control, adjusting the current delivered to the motors to maintain a set speed based on data gathered by an accelerometer. This automated system results in a steadier current draw, since a driver is more likely to make errors keeping the throttle steady and to overcompensate to reach a desired speed, causing unnecessary spikes in current drawn from the batteries. A steadier current draw is ultimately better for the chemistry of the lead acid batteries and improves the efficiency of the power used during the race.
A PID feedback controller was chosen because, unlike a purely proportional controller, its integral term compensates for steady-state error over time, which reduces the chance of the controller oscillating and creates a more stable approach to the desired speed. Additionally, in an actively changing environment with weather conditions or collisions with other boats, the PID controller responds robustly to sudden and unexpected disturbances and can settle back to a stable speed, unlike purely proportional or even proportional-integral controllers, which can compensate poorly for sudden disturbances and become even more unstable.
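For reference, the control law being described is the standard PID form, where e(t) is the difference between the set speed and the measured speed and u(t) is the throttle signal sent to the motor controllers:

    u(t) = K_p \, e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d \, \frac{de(t)}{dt}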
PID Controller Hardware Design:
The first iteration of the controller was implemented on the Arduino Due for quick turnaround. A TI LM2575 buck was used to bring the voltage of one of the 12V batteries down to 9V for the Arduino power rail. An LDO was used to generate 5V rails for the current sensors, voltage sensors, and an LCD screen used as a dashboard by the driver. The data from the current sensor was fed into the Arduino for the PID controller, and the Arduino DAC pin output a voltage signal to the throttle control pin on the motor controllers to complete the feedback loop. This was first implemented on a breadboard and is currently implemented on a PCB placed on top of the Arduino as a shield. A manual switch is placed between the Arduino throttle output and the 0-5k throttle so that the driver can switch the boat into manual control if the cruise control malfunctions.
PID Controller Software Design:
A running-median library was used to take the median of a set of samples, minimizing noisy fluctuations and ensuring the algorithm works with robust data. The initial constants for the controller were determined experimentally through extensive testing on the Charles River. The proportional, integral, and derivative errors are all calculated from how far the speed read from the accelerometer is from the set speed, and a new signal is sent to the motor controllers to compensate for the error. These are recalculated every loop to actively maintain the set speed. The set speed is changed throughout the race based on the readout from an internal timer and the voltage sensors: as the race nears its end, if there is still substantial voltage remaining in the batteries, the set speed is increased to cover a greater distance. All of the data from the event is logged to an SD card and then read into MATLAB to determine changes to the controller constants for a better response in the next race of the event.
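A minimal sketch of the control loop in Arduino-style C++ is shown below; the gains, the set speed, the scaling to DAC counts, and the median-filtered speed helper are placeholders, not the values tuned on the Charles River.

    // Minimal sketch of the cruise-control loop for the Due. Gains, pin choices,
    // DAC scaling, and the speed helper are illustrative placeholders.
    float Kp = 1.0, Ki = 0.1, Kd = 0.05;     // placeholder gains
    float setSpeed = 8.0;                     // placeholder target speed
    float integralErr = 0, lastError = 0;
    unsigned long lastTime = 0;

    float readSpeedMedian() {
      // Placeholder: the real code took the median of a window of
      // accelerometer-derived speed samples via a running-median library.
      return analogRead(A0) * (15.0 / 1023.0);
    }

    void setup() {
      analogWriteResolution(12);              // the Due DACs are 12-bit
      lastTime = millis();
    }

    void loop() {
      unsigned long now = millis();
      float dt = (now - lastTime) / 1000.0;
      if (dt < 0.02) return;                  // recompute roughly every 20 ms
      lastTime = now;

      float error = setSpeed - readSpeedMedian();
      integralErr += error * dt;
      float derivative = (error - lastError) / dt;
      lastError = error;

      // PID output, scaled to DAC counts (the scale factor is an assumption)
      float u = Kp * error + Ki * integralErr + Kd * derivative;
      int counts = constrain((int)(u * 400.0), 0, 4095);
      analogWrite(DAC0, counts);              // throttle signal to the motor controllers
    }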
Motor controllers and contactors for throttle
Debugging PID controller circuit in shop prior to testing on Charles River
Early breadboard prototype for PID controller
It works!
Lymphedema Screening Using Microsoft Kinect
Project Dates: 9/14/2012 - 6/10/2013
Summary:
Created a prototype that uses the Microsoft Kinect to quantify water retention as a digital screening and classification system for grade 1-4 edema patients. This was done in affiliation with the Northeastern CenSSIS Lab and the Department of Radiation Oncology at Massachusetts General Hospital for an independent study.
Additional Details:
The goal of the study was to create an affordable and reliable means of quantifying the mass of a lymphedema-afflicted arm. If reliable, the data obtained from the system would allow doctors to more quickly determine an appropriate treatment plan for each patient. The Microsoft Kinect was chosen due to its relatively affordable price point, extensive software support, and imaging capabilities. The Kinect generates a depth field from an infrared camera which can be used to estimate the volume of an arm. Additionally, the Kinect is a highly portable device that doesn't require a specialized room to be set up properly. This would allow physicians to bring the setup from patient to patient, which creates greater patient turnover and consequently increases hospital screening availability.
Software Design:
Both a color image and a depth image were extracted from the RGB and IR cameras on the Kinect in C# using the OpenNI libraries. This data was then ported into MATLAB through a MATLAB wrapper for C#. In MATLAB a visual of the depth image of the arm was obtained by formatting the raw data from the Kinect into a 2D image based on the resolution the user set on the Kinect when taking the initial image.
The depth of every point on the arm seen in the image was subtracted from the depth value of the pixel on the arm farthest from the Kinect to determine the "thickness" of that point. This farthest point was identified by processing the depth image, and its pixel location was confirmed against the RGB image: the physician was instructed to place a piece of green reflective tape at the midpoint of the arm and to have each patient position their arm with the palm facing the Kinect. The farthest pixel on the arm was assumed to correspond to the depth of half of the arm, since the back half of the arm couldn't be seen by the Kinect. The volume each pixel represented was calculated as thickness x width x height. The width and height values for each pixel were calculated using scale factors determined experimentally by measuring the dimensions of objects at different distances from the Kinect.
The volume calculated for each pixel was then summed to obtain the volume of half of the arm. The Kinect was then moved to take a picture of the back half of the user's arm, and the sum of the volumes obtained from both pictures gave an estimated volume for the whole arm. The system determined the volume of a model arm used in experimental trials with an accuracy of 5% and a precision of 1%.
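A minimal sketch of the per-pixel volume sum for one view is shown below (plain C++ rather than the original MATLAB pipeline; the arm mask, millimeter depth units, and per-pixel scale factors are assumptions, the real scale factors having been measured experimentally as described above).

    #include <vector>
    #include <cstdint>
    #include <algorithm>

    // Volume of the visible half of the arm from a single depth image.
    double halfArmVolumeMM3(const std::vector<uint16_t>& depth,   // depth image, row-major, mm
                            const std::vector<uint8_t>& armMask,  // 1 where the pixel is on the arm
                            int width, int height,
                            double mmPerPixelX, double mmPerPixelY) {
      // Reference depth: the arm pixel farthest from the Kinect (taken as the arm's midplane).
      uint16_t farthest = 0;
      for (int i = 0; i < width * height; ++i)
        if (armMask[i]) farthest = std::max(farthest, depth[i]);

      double volume = 0.0;
      for (int i = 0; i < width * height; ++i) {
        if (!armMask[i]) continue;
        double thickness = double(farthest) - double(depth[i]);   // mm of arm in front of the midplane
        volume += thickness * mmPerPixelX * mmPerPixelY;          // thickness x width x height per pixel
      }
      return volume;  // summed over the visible half; the other half comes from the second view
    }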
The experimental setup
Arm used as control during early testing
Depth map image
Needle Detector
Project Dates: 6/4/2013 - 12/23/2013
Summary:
Designed a needle detector based on the Hough transform for Philips Healthcare in C++ to isolate a needle and eliminate reflective noise as well as image artifacts from an ultrasound video stream. The implementation was designed to be lightweight so it could be used in real time to assist in invasive procedures such as an ultrasound assisted biopsy or epidural administration.
Additional Details:
During medical procedures where a needle is inserted straight into the body to either obtain a biological sample or inject a drug at an isolated site, the needle is usually oriented close to parallel to the ultrasound transducer. Since the ultrasound beams do not strike the needle at a favorable angle in these procedures, the transducer signal is beam steered to increase the angle of incidence. This increases the visibility of the needle but creates unwanted and often confusing reflections in the ultrasound image.
Algorithm Design:
To eliminate these reflections, a filter was created based on the Hough transform, which ranks the linearity of structures in an image and can function as a line detector. The filter was modified to show the full body of the needle, which is significantly thicker than a single line. Based on the results of the Hough transform, false positives were filtered out, leaving only the true needle. The Hough transform operates on an edge-detected image, so a Sobel edge detector was implemented first. The Sobel edge detector convolves a kernel with each image from the video stream to determine the x and y image pixel gradients. This highlights the pixel locations where there is the greatest change in pixel intensity and filters out pixels from larger, more ubiquitous structures, which makes the eventual Hough transform significantly less computationally expensive. A gradient image was then written using the sum of the x and y gradients.
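A minimal sketch of the Sobel step on a grayscale frame stored as a flat array is shown below (plain C++, not the SIMD-optimized production code; border handling is omitted for brevity).

    #include <vector>
    #include <cmath>

    // Sobel gradient magnitude image, using the sum of the x and y gradients.
    std::vector<float> sobelGradient(const std::vector<float>& img, int w, int h) {
      std::vector<float> grad(w * h, 0.0f);
      auto at = [&](int x, int y) { return img[y * w + x]; };
      for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
          // 3x3 Sobel kernels for the horizontal and vertical pixel gradients
          float gx = -at(x-1,y-1) + at(x+1,y-1)
                     - 2*at(x-1,y) + 2*at(x+1,y)
                     - at(x-1,y+1) + at(x+1,y+1);
          float gy = -at(x-1,y-1) - 2*at(x,y-1) - at(x+1,y-1)
                     + at(x-1,y+1) + 2*at(x,y+1) + at(x+1,y+1);
          grad[y * w + x] = std::abs(gx) + std::abs(gy);  // sum of x and y gradients
        }
      }
      return grad;
    }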
The Hough transform looks at every high-valued pixel in the gradient image, representative of the edges and outlines of the original ultrasound image. Every possible line that could pass through each high-valued pixel is given a vote in an accumulator space whose axes are the slope and the intercept. The points with the highest intensity in the accumulator space correspond to the slopes and intercepts of the most collinear structures in the Sobel image. The accumulator space was then thresholded to determine the most collinear object in the image, and the ultrasound image was reconstructed only in the area around the needle based on the Hough transform output.
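A minimal sketch of the accumulator voting in the slope/intercept form described above follows; the slope range, bin counts, and edge threshold are assumptions rather than the values used on the Philips machines.

    #include <vector>
    #include <algorithm>

    // Hough accumulator over (slope, intercept). Peaks above a threshold mark the
    // most collinear structure in the gradient image (the needle).
    std::vector<int> houghAccumulate(const std::vector<float>& grad, int w, int h,
                                     float edgeThreshold, int slopeBins, int interceptBins,
                                     float slopeMin, float slopeMax) {
      std::vector<int> acc(slopeBins * interceptBins, 0);
      // Range of possible intercepts b = y - m*x for the given slope range and image size.
      const float bMin = std::min(0.0f, -slopeMax * (w - 1));
      const float bMax = (h - 1) + std::max(0.0f, -slopeMin * (w - 1));
      for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
          if (grad[y * w + x] < edgeThreshold) continue;      // only strong edge pixels vote
          for (int si = 0; si < slopeBins; ++si) {
            float m = slopeMin + (slopeMax - slopeMin) * si / (slopeBins - 1);
            float b = y - m * x;                              // a line through this pixel
            int bi = (int)((b - bMin) / (bMax - bMin) * (interceptBins - 1));
            acc[si * interceptBins + bi]++;                   // cast a vote for (m, b)
          }
        }
      }
      return acc;
    }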
This algorithm was built and tested in MATLAB, then ported to low-end and high-end Philips ultrasound machines in C++ and optimized using SIMD libraries.
Image of a needle in a pork roast. The needle and transducer are positioned similarly to how they would be during a biopsy
The beam steered image with noise and needle reflections
The Sobel Edge image
The accumulator space. The areas of brighter intensity have received more votes
The pork roast being scanned with the needle detection feature on
Car Detector
Project Dates: 3/5/2013 - 3/28/2013
Datasets: UIUC Image Database for Car Detection, Caltech 101
Summary:
Designed a program in MATLAB to detect cars in an image based on an image vocabulary generated by a training set of images.
Algorithm Design:
502 training images of the side view of a car from the UIUC car detection dataset served as ground truth, since each image is cropped so that a car sits front and center. A Harris corner detector was then used to find features around each car. The resulting 502 sets of features could produce too specific a classifier and would add significant processing time, so to work with a more manageable feature set, an image vocabulary was created using k-means clustering: a series of centroids was found around groups of similar features, and this more limited number of centroids (50 in this case) became the restricted image vocabulary used for classification. Each word was weighted by how many of the 502 initial images had features falling in the window around its centroid.
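A minimal sketch of the k-means step that builds the 50-word vocabulary follows (plain C++ rather than the original MATLAB; the descriptor layout, initialization, and iteration count are illustrative assumptions).

    #include <vector>
    #include <limits>

    // Feature descriptors are treated as fixed-length float vectors.
    using Feature = std::vector<float>;

    static float dist2(const Feature& a, const Feature& b) {
      float d = 0;
      for (size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
      return d;
    }

    std::vector<Feature> buildVocabulary(const std::vector<Feature>& feats, int k, int iters) {
      // Naive initialization: the first k features (assumes at least k features exist).
      std::vector<Feature> centroids(feats.begin(), feats.begin() + k);
      std::vector<int> assign(feats.size(), 0);
      for (int it = 0; it < iters; ++it) {
        // Assignment step: each feature joins its nearest centroid (visual word).
        for (size_t i = 0; i < feats.size(); ++i) {
          float best = std::numeric_limits<float>::max();
          for (int c = 0; c < k; ++c) {
            float d = dist2(feats[i], centroids[c]);
            if (d < best) { best = d; assign[i] = c; }
          }
        }
        // Update step: move each centroid to the mean of its assigned features.
        std::vector<Feature> sums(k, Feature(feats[0].size(), 0.0f));
        std::vector<int> counts(k, 0);
        for (size_t i = 0; i < feats.size(); ++i) {
          for (size_t d = 0; d < feats[i].size(); ++d) sums[assign[i]][d] += feats[i][d];
          counts[assign[i]]++;
        }
        for (int c = 0; c < k; ++c)
          if (counts[c] > 0)
            for (size_t d = 0; d < sums[c].size(); ++d) centroids[c][d] = sums[c][d] / counts[c];
      }
      return centroids;   // the k centroids form the restricted image vocabulary
    }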
A test image with a car from the Caltech 101 dataset is then read in and its features are detected. Votes are cast to see which feature sets in the image most closely match the existing image vocabulary. Many features in an image may bear similarities to the car vocabulary, so to reduce false positives, only the results with the highest votes for a car were classified as cars, and boxes were traced around each detected car. This detector only works assuming there is a car in the image; otherwise it will classify the set of features that most resembles a word from the car vocabulary as a car. This code was created and implemented in MATLAB.
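A minimal sketch of the voting step is shown below, reusing the Feature type and dist2() helper from the sketch above; how candidate windows are generated and the match threshold are assumptions.

    // Pick the candidate window whose features best match the car vocabulary.
    int bestWindow(const std::vector<std::vector<Feature>>& windowFeats,  // features per candidate window
                   const std::vector<Feature>& vocabulary, float matchThresh2) {
      int best = 0, bestVotes = -1;
      for (size_t w = 0; w < windowFeats.size(); ++w) {
        int votes = 0;
        for (const Feature& f : windowFeats[w])
          for (const Feature& word : vocabulary)
            if (dist2(f, word) < matchThresh2) { ++votes; break; }  // feature matches a car word
        if (votes > bestVotes) { bestVotes = votes; best = (int)w; }
      }
      return best;   // the window with the most votes is boxed as the detected car
    }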
Example of features detected in a training image. The features do not all correspond to one car since there are two cars in the image. Errors this may introduce into the generated vocabulary become more negligible as the number of training images processed increases.
Detected car boxed in blue. The feature sets with the second and third highest likelihoods of being cars are in green boxes; they often overlap with the car due to overlapping features.
False positives detected in the tree and building, since both are objects with high corner density and have a greater chance of coinciding with the limited vocabulary. The votes cast for the car in the image still beat the false positives.
Image Stitching Algorithm
Project Dates: 2/1/2013 - 2/15/2013
Summary:
Designed a program in MATLAB to create a mosaic / panoramic image out of two images. The algorithm takes two images, finds common features between them, then stitches the images together after using a homography to warp the coordinates of one image into the coordinate system of the other.
Algorithm Design:
A Harris corner detector was used to find corner features in the two images. The detector considered sparse sets and evaluated the score of each detected corner to ensure optimal corner detection results. The first image was then divided into several smaller windows of between 100 and 400 pixels. Each window was compared to a pixel set of the same size in the second image using normalized cross correlation. If a set of features from a window in the first image was a 90% or greater match to a set of features in the second image, a correspondence was generated.
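A minimal sketch of the normalized cross-correlation score between a window in the first image and a same-size patch in the second is shown below (plain C++ rather than the original MATLAB; bounds checks are omitted for brevity). Scores of 0.9 or higher would generate a correspondence, matching the 90% threshold above.

    #include <vector>
    #include <cmath>

    // Normalized cross-correlation between two same-size patches of flat, row-major images.
    float ncc(const std::vector<float>& img1, int w1, int x1, int y1,
              const std::vector<float>& img2, int w2, int x2, int y2,
              int winW, int winH) {
      float mean1 = 0, mean2 = 0;
      int n = winW * winH;
      for (int dy = 0; dy < winH; ++dy)
        for (int dx = 0; dx < winW; ++dx) {
          mean1 += img1[(y1 + dy) * w1 + (x1 + dx)];
          mean2 += img2[(y2 + dy) * w2 + (x2 + dx)];
        }
      mean1 /= n; mean2 /= n;
      float num = 0, var1 = 0, var2 = 0;
      for (int dy = 0; dy < winH; ++dy)
        for (int dx = 0; dx < winW; ++dx) {
          float a = img1[(y1 + dy) * w1 + (x1 + dx)] - mean1;
          float b = img2[(y2 + dy) * w2 + (x2 + dx)] - mean2;
          num += a * b; var1 += a * a; var2 += b * b;
        }
      return num / std::sqrt(var1 * var2 + 1e-12f);  // close to 1.0 means the patches match
    }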
After all of the windows have been correlated, the correspondences between the two images are compared. The coordinates of the second image are warped into the coordinate space of the first image using a homography based on an estimated projective geometric transform between the correspondences of the two images. The MATLAB homography function has RANSAC built in to estimate the densest set of correspondences and ignore outliers while creating the new coordinate set for the second image. The warped second image and the first image are then both written into a new, larger image space, which contains the stitched image.
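For reference, applying an estimated 3x3 homography H to a pixel of the second image is a matrix multiply followed by a divide by the homogeneous coordinate (a sketch in C++ rather than the original MATLAB; estimating H itself with RANSAC is left to the library as described above).

    #include <array>

    // Map a pixel (x, y) of the second image into the first image's coordinate frame.
    std::array<float, 2> applyHomography(const std::array<float, 9>& H, float x, float y) {
      float xh = H[0] * x + H[1] * y + H[2];
      float yh = H[3] * x + H[4] * y + H[5];
      float wh = H[6] * x + H[7] * y + H[8];
      return { xh / wh, yh / wh };   // divide by the homogeneous coordinate
    }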
First image with corners detected
Second Image Corners Detected
Correspondences between images
Correspondences mapped between images
Homography of 2nd image, mapped to coordinate system of 1st image
Resulting stitched image
Motion Detector
Project Dates: 1/4/2013 - 1/19/2013
Summary:
Designed a frame by frame motion detection program in MATLAB for pre-recorded video for non-real-time applications.
Algorithm Design:
A video is divided into a series of still image frames taken at each second, and these images are stored in a three-dimensional array. For every X-Y pixel coordinate in an image, the derivative is found by subtracting the value at the same coordinate in the previous frame from the value in the next frame. This essentially takes a derivative of all the coordinate values with respect to time across the video.
If the derivative at a coordinate across a set of three consecutive frames is greater than a predetermined threshold, a white pixel is placed at that coordinate in the image being evaluated. To prevent noisy pixels from being displayed due to random fluctuations in intensity between images, a Gaussian smoothing kernel is convolved with each image in the array of images.
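A minimal sketch of the differencing and thresholding step is shown below (plain C++ rather than the original MATLAB; frames are assumed to be grayscale and already Gaussian-smoothed).

    #include <vector>
    #include <cstdint>

    // Binary motion mask for the middle frame of a three-frame window:
    // the temporal derivative at each pixel is next minus previous, thresholded.
    std::vector<uint8_t> motionMask(const std::vector<float>& prev,
                                    const std::vector<float>& next,
                                    int w, int h, float threshold) {
      std::vector<uint8_t> mask(w * h, 0);
      for (int i = 0; i < w * h; ++i) {
        float dt = next[i] - prev[i];                          // derivative with respect to time
        if (dt > threshold || dt < -threshold) mask[i] = 255;  // white pixel where motion occurred
      }
      return mask;
    }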
Still image from publicly available airport security footage
Second still from airport security video
Motion detected in first still compared to previous and next frame from video
Detected motion of 2nd still in comparison to previous and next frame
Remote Alert Device (RAD)
Challenge: Husky Startup Challenge 2012
Project Dates: 10/15/2012 - 11/15/2012
Summary:
The Remote Alert Device was a design proposal for a device that translated the barking commands taught to 'Seeing Eye' dogs trained for the blind into a visual alert. The goal of this device was to enable 'Seeing Eye' dogs to also serve deaf individuals who require assistive pets due to severe disability.
Additional Details:
Deaf individuals who require service dogs receive dogs that go through a completely different training program than 'Seeing Eye' dogs, one that trains them to visually alert their users to hazards in day-to-day life. This training is significantly costlier, since fewer individuals who need assistive pets are deaf than are blind, and it also results in limited availability of these pets for individuals in need. The goal of this device was to design a solution that would enable dogs trained in the 'Seeing Eye' program to adequately alert deaf individuals of hazards without any additional training.
Device Design:
The RAD consisted of two components: a specialized collar mounted on the dog and a smart watch that the user would wear. The collar was to house two 4.5V batteries (one as an emergency backup) powering an on-board microcontroller, a 3.3V Bluetooth IC, and an I2C-enabled piezoelectric sensor. The sensor would pick up vibrations caused by a barking dog and transmit that data to the user. The microcontroller would implement a bandpass filter to ensure the vibrations picked up are from the dog barking. A 3.3V Bluetooth receiver located in the user's watch would then pick up the barking and flash an LED on the watch at a rate and brightness that corresponded to the dog's barks. CAD models were created, a parts list was drafted, and initial code was written for the MSP430 microcontroller before work on the project ceased, as it was not picked up by the Husky Startup Challenge due to low profitability and a very small target market.
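A minimal sketch of the bark-detection idea on the collar is shown below (plain C; the filter coefficients, sample source, and threshold are illustrative placeholders, not values from the abandoned MSP430 code).

    // Band-pass filter the piezo samples so only bark-band vibrations count,
    // then flag a bark when the filtered output crosses a threshold.
    static const float b0 = 0.2f, b1 = 0.0f, b2 = -0.2f;   // placeholder band-pass biquad (zeros at DC and Nyquist)
    static const float a1 = -1.5f, a2 = 0.6f;              // stable complex poles
    static float xp1 = 0, xp2 = 0, yp1 = 0, yp2 = 0;       // filter state

    int barkDetected(float sample, float threshold) {
      // Direct-form I biquad band-pass filter
      float y = b0 * sample + b1 * xp1 + b2 * xp2 - a1 * yp1 - a2 * yp2;
      xp2 = xp1; xp1 = sample;
      yp2 = yp1; yp1 = y;
      return (y > threshold) || (y < -threshold);
    }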