Hi! My name is Jasmine Lu.
I AM A(N)...
  • STUDENT.
  • ENGINEER.
  • CREATOR.
  • EXPLORER.

These four descriptors are all very important parts of my identity.
Click on each word to learn a little more about why it is special to me.
I am in my last year of undergraduate studies at Duke University, double majoring in Electrical and Computer Engineering and Computer Science with a minor in Visual Media Studies. On campus, I am involved in a variety of organizations. I lead a group called Wiring With Women that fosters an inclusive environment for women interested in technology. I am the tech lead and president of a group called AKIN, which aims to provide a better platform for campus discussions of social identity issues. I am also a Duke Technology Scholar and was previously active in our campus's IEEE and Design for America chapters.

Being a student, to me, means developing the discipline to break down and master methods of understanding how the world works. This includes the physical world, the human world, and everything in between. If money and time were no object, I might never stop taking classes. That rings truer when I'm not stressed about exams and projects, but I always look back at the end of a semester amazed at how much I've learned.


I really want to build tools that are useful and impactful. I have always been this way - driven to invest my time in building something that would 'make a difference'. How I define impact, and what type of impact is most deserving of my focus, has evolved over the years. Honestly, I am still trying to answer that question. But at the root of it, I know I wish to contribute by building tools that might help guarantee a brighter future. I am also very fond of calling myself a 'critical engineer' (manifesto linked below).

As an engineer, I think a lot about how I wish to grow my technical skills. While my coursework helps me build up fundamentals, I also push myself to advance technically through projects, research, and internships. Currently, I am doing research under Professor Regis Kopper in XR (Virtual and Augmented Reality... and beyond!). The engineering projects within my research require me to apply my technical skills to provide solutions for other fields (currently Medicine and Archaeology). Other engineering experiences I've had include work with Duke's Developing Healthcare Technologies Lab, the Engineering World Health Summer Institute, and Google's Engineering Practicum Internship working with Daydream Labs and Google Earth VR.


I don't imagine I will ever be able to stop thinking of my life as my art, and with that, I don't imagine I will ever stop wanting to create for the sake of creating. I find that there is so much power in the act of creating - it allows for a sense of dignity, uniqueness, and meaning that I haven't easily found elsewhere.

Oftentimes, I'm an admirer more than I am a creator - I love watching films, reading across mediums, and absorbing audio and visuals. When I am creating, it is usually in the form of doodles, paintings, visual graphics, and writing (which I do on a daily basis). I have a blog that I have kept for over five years, which I cherish but don't wish to link on my public website. I'd love to talk about it, though, and share what I've learned from keeping it.

I recently started an Instagram, linked below, with some of the things I've created on my iPad Pro.

I have always had multiple interests. I love exploring history, art, philosophy, politics, and more alongside engineering and technology. Being a constant explorer is something I define myself by. I am known for asking questions that push conversations toward new or often undiscussed aspects of a topic. I always want to find myself fascinated by new ways of understanding the world. Exploring keeps me excited about my life and its endless possibilities, but it also keeps me grounded in my humanity as I find new ways of connecting to people and understanding them better.

Exploring is something I hope to be engaged in my whole life. I hope never to be static in that respect and to push myself to keep exploring even when it is not the safe or conventional thing to do.

I plan to create an interactive visualization that shows how and what I explore, but I'm afraid that's unavailable right now.
Tech-splaining

Concept

With the influx of smart assistants and predictive technologies, there are more and more ways for algorithms to establish and reinforce norms, subsequently becoming the dominant influencers and mediators of how we think, act, and respond to our world. This project would aim to explore how smart voice assistants in particular could dictate the sort of conversations we have with each other and with them.

Designed to be addressed as conversational agents (i.e., “Hey <device name>”), these smart assistants are meant to feel natural to talk to, but they are not directly involved in conversations among people. If they were set to become active participants in the conversations happening in the rooms they occupy, what would happen? Would people feel comfortable allowing a device to have an equal part in the discussion? What if it started to dominate the conversation?

Playing on the phrase mansplaining, this project aims to showcase what tech-splaining might be like. Because smart assistants are connected to the internet and have access to all sorts of information, they could easily be taken as the authority on any topic of discussion. This project sets the assistant to assert that authority by interrupting people mid-sentence to finish their thoughts or to offer up new information about the topic from Wikipedia or Google searches.

Early Brainstorm Sketches

Approach

To create this annoying smart assistant device, I will be leveraging Google APIs and SDKs such as the Google Cloud Natural Language API, the Speech-to-Text API, and the Google Assistant SDK to create software that functions as a Google Assistant normally would, but with the twist of interrupting and offering excessive information.
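
To make this concrete, here is a rough Python sketch of what the interruption loop might look like, using the Speech-to-Text streaming API's interim results to butt in mid-sentence and the wikipedia package for "facts". The mic_chunks audio generator, the naive topic picking, and the interject() placeholder are my own illustrative assumptions, not the final implementation.

    import wikipedia
    from google.cloud import speech

    def tech_splain(mic_chunks):
        """Listen to the room and interject with Wikipedia 'facts'."""
        client = speech.SpeechClient()
        config = speech.RecognitionConfig(
            encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
            sample_rate_hertz=16000,
            language_code="en-US",
        )
        streaming_config = speech.StreamingRecognitionConfig(
            config=config,
            interim_results=True,  # interim results are what let us interrupt mid-sentence
        )
        requests = (speech.StreamingRecognizeRequest(audio_content=chunk)
                    for chunk in mic_chunks)
        for response in client.streaming_recognize(config=streaming_config, requests=requests):
            for result in response.results:
                transcript = result.alternatives[0].transcript
                # Interrupt as soon as a partial sentence is long enough to mine a topic.
                if not result.is_final and len(transcript.split()) > 5:
                    topic = transcript.split()[-1]  # naive topic pick: the latest word
                    try:
                        interject(wikipedia.summary(topic, sentences=1))
                    except Exception:
                        pass  # no article or ambiguous topic; let the humans talk (for now)

    def interject(fact):
        # Placeholder: on the device this would cut in over the speaker via TTS.
        print("ACTUALLY... " + fact)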

Currently, I am also exploring how to embed other negative aspects of internet culture, and of our relationship to our devices, into the conversational agent. Such explorations include constantly steering the conversation toward what's trending on Twitter, responding to any strong expression of sentiment with memes, inducing 'FOMO' by continuously reminding users of the events they are missing, pulling up advertisements whenever consumer products are mentioned, and alerting the user every time President Trump tweets.

This alternative smart assistant will run on a Raspberry Pi connected to a microphone, speaker, and screen. It will be cased in a 3D-printed model that anthropomorphizes the device, giving it a face and a big cheesy grin. In addition, I'd like to add LEDs to the eyes for extra expressive range (normal versus angry).
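
As a sketch of the LED eyes, the snippet below toggles between "normal" and "angry" with the RPi.GPIO library; the pin numbers are placeholders, not the final wiring.

    import time
    import RPi.GPIO as GPIO

    NORMAL_EYE, ANGRY_EYE = 17, 27  # BCM pin numbers (placeholders)

    GPIO.setmode(GPIO.BCM)
    GPIO.setup([NORMAL_EYE, ANGRY_EYE], GPIO.OUT)

    def set_mood(angry):
        """Switch the face between its normal and angry eye LEDs."""
        GPIO.output(NORMAL_EYE, not angry)
        GPIO.output(ANGRY_EYE, angry)

    set_mood(angry=True)   # e.g. while the assistant is being talked over
    time.sleep(2)
    set_mood(angry=False)
    GPIO.cleanup()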

Exhibition - COMING SOON! (Dec 7th)

InfiniRing

Concept

Much of current activity tracking relies either on mobile phones coupled with direct user input and context awareness, or on complex computational models built from sensor data to categorize movements. Within these two paradigms, activity tracking is either limited to a specific subset of activities, not seamlessly integrated into our daily lives, or just plain inaccurate.

This project proposes a different form of activity tracking, one that can be more naturally embedded in our daily lives and gives the user the power to determine which activities ought to be tracked. InfiniRing explores a ring wearable that interacts with NFC tag stickers placed on everyday objects and communicates with a mobile application. Such a device could allow for a less intrusive means of collecting data about activities involving contact with the hands. Examples of use include tracking how often a pill is taken by logging whenever the ring contacts a sticker on the pill bottle, or tracking back to when and where you last held your keys.



The ring interacts with the stickers and relays information about each interaction to a mobile application, all while being easy to wear throughout the day. The stickers contain NFC tags and can easily be placed on any object of choice. The mobile application receives the relevant information every time an interaction occurs and makes it easily accessible and understandable, allowing users to harness their activity data however they wish.

Approach

Ring

To create this ring device, we explored various small microcontrollers to prototype with before settling on Microduinos for their small size and stackable modules. We used a MicroUSB core module (for uploading Arduino code), an NFC module with antenna (for reading stickers), a BLE module (for communicating with mobile devices), and a battery management module for power.


In addition to building the hardware and writing the Arduino code, I worked on the ring's form. To house the hardware, I designed, modeled, and 3D-printed a form factor that would be comfortably wearable and functional. On the left are my initial sketches of what that form might look like, and on the right are adjustments made to fit the specifications of the hardware we worked with.

In the end, our prototype came out looking like this (below). In the future, we'd like to make the hardware much smaller by designing custom circuits, and we'd also like to look into integrating haptics to notify the user when a tag has been read.

Stickers

The stickers have NFC tags embedded inside them. Cheap stickers and small NFC tags (even smaller than a penny) that can be taped to objects are sold by Adafruit. These stickers can be reused and attached to any object with minimal cost and obtrusion.

Mobile Device

The accompanying mobile app allows users to see every instance their ring interacted with an NFC sticker and to categorize the activity associated with each unique sticker. The app lets people organize their activity data and (if they choose) see the time and place each interaction occurred.
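
To illustrate the data model, here is a small Python sketch of how the app might log taps and answer "when did I last hold my keys?". The event fields, tag UIDs, and labels are illustrative assumptions, not the shipped schema.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class TapEvent:
        tag_id: str                     # UID of the NFC sticker the ring touched
        timestamp: datetime
        location: tuple | None = None   # optional (lat, lng) from the phone

    # The user labels each sticker once; every later tap inherits that label.
    labels = {"04:A2:2B:19": "pill bottle", "04:7F:0C:C3": "keys"}
    events = []

    def on_ring_notification(tag_id, location=None):
        """Called whenever the ring relays a tag read over BLE."""
        events.append(TapEvent(tag_id, datetime.now(), location))

    def last_seen(activity):
        """e.g. last_seen('keys') -> when and where the keys were last held."""
        matches = [e for e in events if labels.get(e.tag_id) == activity]
        return max(matches, key=lambda e: e.timestamp, default=None)
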
VR Trowel Kit

Concept

One of the primary issues in Virtual Reality (VR) is the difficulty of bringing objects from our ‘real world’ into our virtual environments. In general, our only sense of presence in our virtual environments is based on visuals and VR controllers. These VR controllers are what we use to interact with our virtual environments and play a significant part in determining how we can use VR technologies. Unfortunately, the variety of interactions that can be simulated with controllers are limited by their specific design and form. This project aims to explore one means of creating custom controllers through the use of the SteamVR Hardware Development Kit (HDK) to endow tracking capabilities to any object.

Approach

The custom VR controller I decided to create was a trowel, to be used in an application that lets users simulate an archaeological excavation using data collected from a dig site. A trowel-shaped controller offers unique interaction techniques: users can actually scrape against the floor to dig through layers of the virtual excavation, and both the physical feedback from scraping and the familiar form of the tool have the potential to improve the user's sense of immersion. Below are the original trowel shape and some sketches of my design approach.



To create this device, a model of the trowel was 3D-printed. The HDK, which is meant to pair with the HTC Vive's lighthouse tracking system, was attached to this model to enable it to send tracking information. Twenty-four SteamVR HDK sensors were placed on the model to maximize tracking across all potential device poses, with placement determined by the SteamVR tracking simulation software. The device was then further calibrated by communicating with an HTC Vive lighthouse to determine its actual IMU orientation and sensor placement. After testing, the design was reconfigured to optimize tracking by adding an icosahedron to the model, providing additional faces on which to place sensors. Below are visuals of the tracking quality from the simulation output.
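
The icosahedron works well here because its vertices are evenly spread over a sphere, giving well-separated candidate sensor directions. Below is a minimal Python sketch of the standard golden-ratio construction; the 3 cm radius is just an example, not the trowel's actual dimensions.

    import itertools, math

    PHI = (1 + math.sqrt(5)) / 2  # golden ratio

    def icosahedron_vertices(radius=1.0):
        """Return the 12 vertices of an icosahedron centered at the origin."""
        verts = []
        for a, b in itertools.product((-1, 1), repeat=2):
            # cyclic permutations of (0, +/-1, +/-phi)
            verts += [(0, a, b * PHI), (a, b * PHI, 0), (b * PHI, 0, a)]
        norm = math.sqrt(1 + PHI ** 2)  # length of each raw vertex vector
        return [tuple(radius * c / norm for c in v) for v in verts]

    for v in icosahedron_vertices(radius=0.03):  # ~3 cm mounting solid
        print("candidate sensor position (m):", [round(c, 4) for c in v])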

To include controller functionality, I explored using an Adafruit Feather microcontroller to send button input data. This allowed me to attach a custom button to toggle between modes, as well as smaller buttons on the side of the trowel to detect when it was being used to scrape. Using the hobbyist microcontroller for custom inputs was much easier than endowing the object with tracking. The final product is pictured below, along with some notes on design constraints.

Design Constraints

  1. Optimal Sensor Placement - For the device to be tracked well, the lighthouses should ‘see’ as many sensors as possible in any orientation. Given the shape of the trowel, there are not many sides available for sensors, and half of it is a flat structure. Because of this, certain orientations significantly limit the number of visible sensors, which in turn limits tracking capability.
  2. Length of Sensor Flex Cables - In the HDK, the sensor flex cables are four inches long, which limits how far apart sensors can be placed from the core module while also leaving excess cable when sensors sit close to it.
  3. Similarity to Trowel Shape - To optimize immersion and demonstrate the value of creating a custom controller of unique shape, it was key to prioritize maintaining the shape of the trowel tool so that it was still distinguishable as a trowel.
  4. Areas to Avoid for Sensors - Certain sections such as the grip and the sides of the blade could not have sensors as this would be in conflict with how the device would be used. This further restricted the amount of area in which sensors could be placed.
Akin

Concept

The Problem

The college experience is supposedly marked by an exposure to new ideas and people from communities wildly different from those at home, yet most students find themselves socializing with people of similar background and identity. Evidence of this can be seen in investigations done by The Chronicle on the diversity present in Greek Life and Selective Living Groups, formalized social organizations on Duke’s campus.

When controversial, identity-related incidents occur on campus and students are looking to discuss or understand what happened, they have no campus-wide platform for a productive conversation. For example, in the spring 2018 semester, when a racially charged incident occurred, a student posted on the Duke Meme Facebook group to address it. While this created conversation on the issue, much of the discussion centered on whether the post belonged in a group meant for memes rather than nuanced exploration of controversial events.

Many other incidents like this have followed similar patterns where campus-wide conversations are limited by homogeneity of social groups and a lack of appropriate platform. These conversations are difficult and uncomfortable, but increasingly necessary to support a college environment where people feel safe and welcome.

An underlying cycle of microaggressions and lack of communication has become ingrained in many aspects of campus culture, and the effects are felt each day whether or not they are noticed. To date, there have not been any sustainable solutions to these issues. We hope that Akin can be one major leap toward making Duke a better and more understanding community.

Our Solution

Akin aims to foster a community where students can converse, learn, and share through anonymous online and in-person discussion, art and media projects, podcasts, and articles focusing on identity. As an organization, we aim to explore the best approaches to building systems that encourage helpful and healing conversation within Duke's community.

Approach

Multimedia Projects

Multimedia explorations into identity-related topics including, but not limited to, articles, podcasts, videos, art, and photography. Projects aim to spark conversation by highlighting personal narratives and relevant events.

In-person Discussions

Creating spaces for people to have conversations about specific topics where individuals can come to openly share and hear different perspectives while meeting others who are also interested in exploring these topics together.

Anonymous Online Forum

An online platform where people can share views, ask questions, and receive other perspectives on difficult topics without fear of being targeted or embarrassed by their lack of knowledge.

Personal Contribution

Currently, I am the president of the AKIN student organization, overseeing its direction and strategy. I'm in the process of turning this project into a research project so that it can be sustained in the future. My recent contributions include the redesign and build of our new site. Over the past few years, I have been involved in building out the anonymous online forum and integrating it with Duke's verification system to ensure these conversations are limited to the Duke community. I specifically implemented features to ensure anonymity on the forum and better facilitate discussion. I've also overseen various creative projects and created graphics for the organization; some are included below.
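
As one illustration of the anonymity problem (not AKIN's actual code), a forum can verify users while never storing who said what by deriving a stable pseudonym from a keyed hash of the user's NetID and the thread. A minimal Python sketch is below; the secret key and handle format are hypothetical.

    import hmac, hashlib

    SECRET_KEY = b"rotate-me-regularly"  # server-side secret (hypothetical)

    def pseudonym(netid, thread_id):
        """Same user + same thread -> same handle; a new thread -> a fresh handle."""
        digest = hmac.new(SECRET_KEY, f"{netid}:{thread_id}".encode(), hashlib.sha256)
        return "anon-" + digest.hexdigest()[:8]

    print(pseudonym("jl123", "thread-42"))   # stable within a thread
    print(pseudonym("jl123", "thread-43"))   # unlinkable across threads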

Google Daydream Internship Prototypes

Overview

Over the summer of 2017, I had the privilege of interning at Google Daydream in Mountain View. I interned on two teams - Google Earth VR and Daydream Labs. Google Earth VR is a product team, whereas Daydream Labs is an experimental research team. Because of this, I was given the flexibility as an intern to create experimental prototypes in VR/AR using Google Earth VR technologies. Though I am unable to show all the prototypes I made, I can share the following - Paint Earth VR and Earth AR Maps.

Paint Earth VR

Concept

What if the world could be your canvas? In Google Earth VR, you can fly and walk around the world and see Earth's geometry as a giant. What if, instead of just being able to view the world in Virtual Reality, you were able to create in it as well? What if you could bring your virtual experience in Google Earth VR to the real world? These are the ideas behind Paint Earth VR, which explores the ability to annotate the world in Earth VR and then see those annotations in 'real life' through a mobile phone with augmented reality.

Approach

Virtual Reality Application
Before this internship, I had never developed with Unity. With the support and mentorship of my hosts, Rob Jagnow and Stefan Walker, I created a VR application using Unity and the Earth VR Unity plugin (not yet publicly available) in which you can paint in VR. The experience begins by letting users associate an ID with their session. Users can then annotate the world by painting or placing virtual objects. For painting, I created an extendable paintbrush that lets you change the size of the brush and the distance at which you paint in 3D space. As the user paints, the coordinates of the points, the color, and the size of the brush strokes are stored in a Firebase database. While I was building this prototype, the Blocks team had just released their product and was working on an integration to bring Blocks models into Unity. I brought that integration into my project, creating a new menu where people could view Blocks models and bring them into the VR world - saving their scale, orientation, and coordinates to Firebase.
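
For a sense of what was stored, here is a rough Python sketch of the stroke schema written against the Firebase Realtime Database REST API; the project URL and field names are illustrative (the prototype itself used a Unity Firebase integration).

    import requests

    FIREBASE_URL = "https://paint-earth-demo.firebaseio.com"  # hypothetical project

    def save_stroke(session_id, points, color, brush_size):
        """Append one brush stroke to a painting session."""
        stroke = {
            "points": points,         # list of [lat, lng, altitude] samples
            "color": color,           # e.g. "#ff3366"
            "brushSize": brush_size,  # world-space radius of the brush
        }
        # POST creates a new child with a unique push ID under the session.
        resp = requests.post(f"{FIREBASE_URL}/sessions/{session_id}/strokes.json",
                             json=stroke)
        resp.raise_for_status()

    save_stroke("demo-session", [[37.422, -122.084, 120.0]], "#ff3366", 0.5)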



Augmented Reality - Tango
For the Augmented Reality application, I used a Tango mobile phone. When users start the application, they can select the session they would like to view from a menu populated via the Firebase database. Users also have to set a "North" so that the paintings and objects are rendered as precisely as possible. Once this orientation is set, the application uses the GPS coordinates of the mobile device, calculates their relation to the geographic coordinates stored in Firebase, and renders the paintings and objects in the same places they were given in the VR application. Users can then walk around and view the annotations as if they were really placed in those spaces in the real world.
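
The core of the geo-anchoring is converting a stored (latitude, longitude) into east/north offsets in meters from the phone's GPS fix. A flat-earth approximation is plenty accurate over painting-sized distances; here is a minimal sketch.

    import math

    EARTH_RADIUS_M = 6_371_000

    def geo_to_local(phone_lat, phone_lng, target_lat, target_lng):
        """Return (east_m, north_m) of the target relative to the phone."""
        d_lat = math.radians(target_lat - phone_lat)
        d_lng = math.radians(target_lng - phone_lng)
        north = d_lat * EARTH_RADIUS_M
        east = d_lng * EARTH_RADIUS_M * math.cos(math.radians(phone_lat))
        return east, north

    # Rotating (east, north) by the user-set "North" heading then gives the
    # render position in the AR session's coordinate frame.
    print(geo_to_local(37.4220, -122.0841, 37.4225, -122.0835))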

Results

Below is a GIF of a demo of the prototype. First you'll see me 'painting' in Virtual Reality above Google's Mountain View campus, creating a Star Wars battle with Blocks models. Then you'll see these models in Augmented Reality through a Tango phone, placed in the same location as they were in Virtual Reality.



I completed the prototype in my first month as an intern and presented it during Google's Daydream Summit, then attended multiple meetings afterward to brainstorm future applications along the lines of my prototype. I also demoed it to various leads and directors within Google Daydream, as well as to Clay Bavor, the VP of Google Virtual and Augmented Reality.

In addition, through the process of prototyping, I worked with the new Blocks + Unity integration that had just been released within Google Daydream for developers to explore. While using the integration in my application, I was also able to suggest new features that were implemented by the team.

Earth AR Maps

Concept

Maps help us understand how our environments are organized spatially and are instrumental in planning how we navigate and travel to different places. Google Maps is a necessity for most people as they drive, take public transport, or even walk in new places. For this prototype, I was interested in exploring how 3D maps could leverage volumetric data to help users better understand their surroundings spatially. Rather than looking at 2D lines for roads and squares for buildings, unique features of different landscapes would be more distinguishable and would give users more information about what surrounds them.

Approach

Initially, I explored a Unity integration from a Google team in Australia that renders blocks of Google Maps geometry. The integration was built for developers interested in location-specific games, but I explored how to customize their builds for my idea. Using the Google Maps API and some 3D assets from Google's Poly library, I created a working prototype that allowed users to enter start and end points to generate directions, which were then overlaid on the map. Below is the original version.
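
The route data itself can come from the public Google Maps Directions web service; a minimal Python sketch is below. The prototype used internal Unity integrations, so treat this as an outline of the idea rather than the code I wrote.

    import requests

    def fetch_route(origin, destination, api_key):
        """Return a route as a list of (lat, lng) points to overlay on the map."""
        resp = requests.get(
            "https://maps.googleapis.com/maps/api/directions/json",
            params={"origin": origin, "destination": destination, "key": api_key},
        )
        resp.raise_for_status()
        steps = resp.json()["routes"][0]["legs"][0]["steps"]
        points = [(s["start_location"]["lat"], s["start_location"]["lng"]) for s in steps]
        end = steps[-1]["end_location"]
        return points + [(end["lat"], end["lng"])]

    # e.g. fetch_route("Pear Ave, Mountain View", "Shoreline Amphitheatre", API_KEY)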



Soon after, the Google Earth VR team was able to explore rendering the realistic Earth VR geometry in AR, on both Tango devices and Google's ARCore. I helped explore visualization techniques for these Earth VR geometries in AR. You can see an exploration displaying Mount Shasta and San Francisco (with an animated airplane) below.

From these explorations, I was able to bridge my two prototypes and create a demo of Google Maps directions rendered on realistic Earth geometry. The GIF below shows a demo of the prototype: directions from our Google Daydream office building on Pear Avenue to the Shoreline Amphitheatre, including an animated 3D model car driving the designated route from start to end. You would be able to place this 3D map on any surface and anchor it so that it could be viewed from any direction.

Results

With the Earth AR Maps prototypes, I also contributed to exciting conversations about future explorations both the Google Earth VR and Daydream Labs teams could pursue in bringing the incredibly detailed, realistic Google Earth VR visuals into Augmented Reality as Google moved toward building rich experiences for ARCore.
XR Research at Duke

Overview

I have always been interested in the potential of VR/AR technologies to better harness 3D information for work, creativity, and knowledge. After my internship with Google Daydream, I wanted to build bridges connecting my technical experience in VR/AR with solutions in other disciplines. As part of the Duke Immersive Virtual Environments Lab, I worked on various projects advised by Professor Regis Kopper to explore this.

ExcaVR

Concept

When archaeologists excavate a site, they are uncovering artifacts of the past to preserve; unfortunately, as they dig, there is no way to go back. To compensate, the Duke Dig@Lab employs a variety of technologies to document each layer of the excavation. Though this data is collected meticulously, current platforms for viewing it lack the flexibility to analyze the layers in ways that would be helpful to those studying the site.

The aim of the ExcaVR project is to create an open-source program for viewing and studying archaeological dig site data in 3D through Virtual Reality. Using the data collected by archaeologists at the site, the excavation can be recreated layer by layer and preserved digitally. By presenting the 3D data in virtual reality and procedurally generating meshes, the project explores new means of interpreting and processing the data collected from digs. As the sole student researcher on this project, I have been working closely with the archaeologists of the Dig@Lab to ensure that the VR application prioritizes functionality that aids their analysis.

Approach

To analyze the layers of a dig, the Dig@Lab's current process is photogrammetry: taking multiple photos of excavation units of interest and using Photoscan to stitch them together and generate height maps. The issue lies in how this volumetric information is viewed afterward. Currently, the best method for viewing these units is ArcScene, a proprietary program that allows for "2.5D" viewing - seeing the units on a desktop and rotating around them for different perspectives. While this is effective enough to get a sense of the form of the layers, it makes it hard to get a good sense of the true size of many features. This is where bringing these layers into Virtual Reality holds a lot of promise.



Using the heightmaps generated by the Photoscan software, which come in the form of GeoTIFFs (images tagged with the specific geographic coordinates the heightmap data corresponds to), I built a Unity plugin that takes this information and generates terrains of the correct size, shape, and texture. To do this, I used the GDAL library to read the GeoTIFFs, generate heightmap images, and scale them appropriately. With this information, the terrains could be placed automatically at the exact distances from one another, so that a user in VR views the dig site at the same proportions at which it exists in reality.
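
The GDAL side of that pipeline looks roughly like the Python sketch below (the plugin itself runs in Unity; this uses the osgeo bindings to show the idea). The geotransform supplies the world origin and per-pixel size, which is what makes true relative placement possible.

    from osgeo import gdal

    def load_unit(path):
        """Read a GeoTIFF unit: height data plus its real-world position and size."""
        ds = gdal.Open(path)
        heights = ds.GetRasterBand(1).ReadAsArray().astype(float)
        origin_x, px_w, _, origin_y, _, px_h = ds.GetGeoTransform()
        width_m = abs(px_w) * ds.RasterXSize    # real-world extent of the unit
        height_m = abs(px_h) * ds.RasterYSize
        # Normalize heights to 0..1 so they can fill a Unity terrain heightmap.
        norm = (heights - heights.min()) / (heights.max() - heights.min())
        return {"origin": (origin_x, origin_y),
                "size": (width_m, height_m),
                "heightmap": norm}

    unit = load_unit("unit_12.tif")  # hypothetical file name
    print(unit["origin"], unit["size"])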



In addition to regenerating whole units, almost all units come with unique features that are of interest to archaeologists. These features are marked out as shapes in shapefiles. I also built functionality that lets users view the shapefiles and cut the shapes out of the terrains so that they can be examined separately.
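
As a sketch of the cutting step (the real tool does this inside Unity), the Python below uses the pyshp package to read feature polygons and mark which heightmap cells fall inside them; the grid arguments are assumed to be world-space cell centers.

    import numpy as np
    import shapefile                  # the pyshp package
    from matplotlib.path import Path

    def feature_mask(shp_path, xs, ys):
        """Boolean mask of heightmap cells inside any feature polygon."""
        grid = np.column_stack([xs.ravel(), ys.ravel()])
        mask = np.zeros(len(grid), dtype=bool)
        for shape in shapefile.Reader(shp_path).shapes():
            mask |= Path(shape.points).contains_points(grid)
        return mask.reshape(xs.shape)

    # Cells where the mask is True can be lifted out of the terrain and shown
    # as a separate mesh for side-by-side study.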

HoloEVD

Concept

In collaboration with the Duke BrainTool Lab headed by Dr. Patrick Codd and two medical residents in Duke Neurosurgery, this project explores how Augmented Reality can be used as an aid in neurosurgery. For this exploration, we are focusing on the procedure of placing an external ventricular drain (EVD). Currently, neurosurgeons use facial landmarks to estimate where to place the drain. This is usually effective, but in cases with abnormal ventricles (especially when they are shifted), it may be insufficient to appropriately place the catheter.

HoloEVD explores how the HoloLens can be used in surgical settings to help surgeons better account for the uniqueness of patients when performing procedures. By overlaying MRI brain scan information onto the patient, seeing how the ventricles sit within the patient's skull in 3D may allow for more accurate placements. This could have use in a wide variety of surgeries, as well as in training.

Approach

To explore the interface of this application and run user tests, we used a plastic model of a human head with a hole in the side where neurosurgeons typically drill to insert catheters into the ventricles. To anchor the brain model, we used a Vuforia marker image placed right next to the head of the "patient". To track catheter placement, a magnetic tracking system was attached to the catheter.

When the user wears the device to place a catheter, they get multiple sources of visual feedback. First, the model of the ventricles is placed in the correct location on the patient's head. This model has a yellow dot designating the target location where the catheter should end up. In addition, the user gets feedback on how far, in millimeters, the tip of the catheter is from its target. The tip of the catheter is projected inside the model as well, and blue dots show the predicted path of the catheter at its current orientation. Finally, when the catheter tip is inside the ventricle, the ventricle model turns blue.

I worked specifically on building the interface for the usability studies and creating a system to collect data on placement accuracy, comparing a "ground truth" measure based solely on the magnetic tracking system against the accuracy seen through the HoloLens (based on the Vuforia marker).
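
In essence, each trial reduces to three distances, sketched below in Python: the true error (magnetic tracker vs. target), the error the surgeon saw through the HoloLens, and the offset between the two tracking systems. This assumes all positions have already been calibrated into one coordinate frame, in millimeters.

    import numpy as np

    def placement_error(tip_magnetic, tip_hololens, target):
        """Return (true error, HoloLens-perceived error, tracking offset) in mm."""
        tip_magnetic, tip_hololens, target = map(
            np.asarray, (tip_magnetic, tip_hololens, target))
        true_err = np.linalg.norm(tip_magnetic - target)     # what actually happened
        seen_err = np.linalg.norm(tip_hololens - target)     # what the surgeon saw
        drift = np.linalg.norm(tip_magnetic - tip_hololens)  # Vuforia misregistration
        return true_err, seen_err, drift

    print(placement_error([10.0, 4.0, 62.0], [11.5, 4.2, 60.9], [9.0, 5.0, 61.0]))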

Results

During our usability study with current neurosurgery residents, we found that while the idea had promise, there were certain key challenges that still had to be resolved in order for the project to be helpful for neurosurgeons.
  1. Tracking System - With Vuforia-based tracking on the HoloLens, rendered visuals can easily end up slightly off from their true locations. It is not the most robust tracking system, and even millimeter-scale differences are a huge deal for something like placing a catheter in the human brain. If the tracking system is not reliable, overreliance on it for information about the ventricles' location can have very negative effects.
  2. Understanding of Spatial Information - One issue when neurosurgery residents tried the device was that although the 3D model was rendered "in space", it was hard to conceptualize exactly where inside the head it was, because our visuals projected onto the outside of the skull while the model resided inside. Future work intends to explore using planar cuts to show how the catheter's projected path would intersect the ventricle model. (left image below)
  3. Game-like Interface - For the study, we used fake brains and fake heads to test the interface. This, coupled with the fact that our visuals gave real-time feedback (how far you were from the target and whether or not you were inside the ventricle), led many of the neurosurgeons to disregard standard protocol for EVD placement and instead perform multiple insertions or reorient the catheter while it was already inside the brain. (right image below) Standard teaching is to make one clean, direct insertion to minimize disruption of neighboring brain tissue. The extra information and the virtual display may simply draw people away from the "real" experience and toward a game-like virtual environment.