
Machine Minds: An Exploration of Artificial Neural Networks




An artificial neural network is a computational method that mirrors the way a biological nervous system processes information. Artificial neural networks are used in many different fields to process large sets of data, often providing useful analyses that allow for prediction and identification of new data. However, neural networks struggle with providing clear explanations regarding why certain outcomes occur. Despite these difficulties, neural networks are valuable data analysis tools applicable to a variety of fields. This paper will explore the general architecture, advantages and applications of neural networks.


Artificial neural networks attempt to mimic the functions of the human brain. Biological nervous systems are composed of building blocks called neurons, which communicate via axons and dendrites. When a biological neuron receives a message, it sends an electrical signal down its axon. If this signal exceeds a threshold value, it is converted to a chemical signal that is sent to nearby neurons.2 Similarly, while artificial neural networks are dictated by formulas and data structures, they can be conceptualized as being composed of artificial neurons, which serve functions similar to those of their biological counterparts. When an artificial neuron receives data, if the change in its activation level exceeds a defined threshold value, it produces an output signal that propagates to other connected artificial neurons.2 The human brain learns from past experience and applies that information in new settings. Similarly, artificial neural networks can adapt their behavior until their responses are both accurate and consistent in new situations.1

While artificial neural networks are structurally similar to their biological counterparts, they are distinct in several ways. For example, certain artificial neural networks send signals only at fixed time intervals, unlike biological neural networks, in which neuronal activity is variable.3 Another major difference is response time: biological neural networks often exhibit a latent period before responding, whereas artificial neural networks respond immediately.3

Neural networks are useful in a wide range of fields that involve large datasets, ranging from biological systems to economic analysis. They are practical in problems involving pattern recognition, such as predicting data trends.3 Neural networks are also effective when data are error-prone, as in cognitive software for speech and image recognition.3

Neural Network Architecture:

One popular neural network design is the multilayer perceptron (MLP). In the MLP design, each artificial neuron outputs a weighted sum of its inputs based on the strength of its synaptic connections.1 Synaptic strength is determined by the formulaic design of the network and is expressed as a weight: stronger, more valuable connections have larger weights and are therefore more influential in the weighted sum. The output of the neuron depends on whether the weighted sum exceeds the neuron's threshold value.1 The MLP design was originally composed of perceptrons, artificial neurons that produce a binary output of zero or one. Perceptrons have limited use in a neural network model because small changes in the input can drastically alter the output of the system. Most current MLP systems therefore use sigmoid neurons instead. Sigmoid neurons take inputs and produce outputs of any value between zero and one, tolerating more variation in the inputs because small changes do not radically alter the outcome of the model.4
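The contrast between perceptrons and sigmoid neurons can be sketched in a few lines of Python. The weights, inputs, and thresholds here are arbitrary illustrations, not taken from any of the cited systems:

```python
import math

def perceptron(inputs, weights, threshold):
    """Binary output: 1 if the weighted sum exceeds the threshold, else 0."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > threshold else 0

def sigmoid_neuron(inputs, weights, bias):
    """Smooth output between 0 and 1; small input changes give small output changes."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# A small nudge to the second input flips the perceptron outright...
print(perceptron([1.0, 0.99], [1.0, -1.0], 0.0))  # weighted sum +0.01 -> 1
print(perceptron([1.0, 1.01], [1.0, -1.0], 0.0))  # weighted sum -0.01 -> 0

# ...but barely moves the sigmoid neuron's output.
print(sigmoid_neuron([1.0, 0.99], [1.0, -1.0], 0.0))
print(sigmoid_neuron([1.0, 1.01], [1.0, -1.0], 0.0))
```

This is the sense in which sigmoid neurons make a network robust to small input variations: the output changes continuously rather than jumping between zero and one.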

In terms of the architecture of the MLP design, the network is a feedforward neural network.1 In a feedforward design, the units are arranged so signals travel exclusively from input to output. These networks are composed of a layer of input neurons, a layer of output neurons, and a series of hidden layers in between the input and output layers. These hidden layers are composed of internal neurons that further process the data within the system. The complexity of this model varies with the number of hidden layers and the number of inputs in each layer.1
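The feedforward flow described above (input layer to hidden layer to output layer) can be sketched as follows. The layer sizes and weights are arbitrary illustrative choices, and the units are sigmoid neurons:

```python
import math

def layer(inputs, weights, biases):
    """Apply one layer of sigmoid neurons; each row of `weights` is one neuron."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
            for row, b in zip(weights, biases)]

# A tiny network: 2 inputs -> 3 hidden neurons -> 1 output neuron.
hidden_w = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]]
hidden_b = [0.0, 0.1, -0.1]
output_w = [[1.0, -1.0, 0.5]]
output_b = [0.0]

x = [0.9, 0.3]
h = layer(x, hidden_w, hidden_b)   # hidden layer activations
y = layer(h, output_w, output_b)   # network output
print(h, y)
```

Signals travel strictly forward: the hidden activations depend only on the inputs, and the output depends only on the hidden activations, matching the feedforward constraint described in the text.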

In an MLP design, once the number of layers and the number of units in each layer are determined, the threshold values and synaptic weights must be set using training algorithms so that the errors in the system are minimized.4 These training algorithms use a known dataset (the training data) to modify the system until the differences between the expected and actual output values are minimized.4 Training allows a neural network to be constructed with optimal weights, which lets it make accurate predictions when presented with new data. One such training algorithm is backpropagation, which follows the gradient vector down the error surface until a minimum is found.1 The difficult part of backpropagation is choosing the step size. Larger steps can result in faster runtimes but can overstep the solution; smaller steps lead to much slower runtimes but are more likely to find a correct solution.1
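The step-size tradeoff can be illustrated by running fixed-step gradient descent on a toy one-dimensional function, f(x) = x^2, standing in for the error surface (this is a sketch of the general idea, not an actual network's training loop):

```python
def gradient_descent(step, start=5.0, iters=20):
    """Minimize f(x) = x^2 (gradient 2x) from `start` with a fixed step size."""
    x = start
    for _ in range(iters):
        x -= step * 2 * x
    return x

print(gradient_descent(0.05))  # small step: slow, steady approach toward 0
print(gradient_descent(0.45))  # moderate step: converges much faster
print(gradient_descent(1.10))  # step too large: overshoots and diverges
```

With a step of 0.05 the iterate shrinks slowly toward the minimum; with 0.45 it converges rapidly; with 1.10 each update overshoots the minimum by more than it corrects, so the error grows instead of shrinking.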

While feedforward neural network designs like MLP are common, there are many other neural network designs. These other structures include examples such as recurrent neural networks, which allow for connections between neurons in the same layer, and self-organizing maps, in which neurons attain weights that retain characteristics of the input. All of these network types also have variations within their specific frameworks.5 The Hopfield network and Boltzmann machine neural network architectures utilize the recurrent neural network design.5 While feedforward neural networks are the most common, each design is uniquely suited to solve specific problems.


One of the main problems with neural networks is that, for the most part, they have limited ability to identify causal relationships explicitly. Developers feed these networks large swathes of data and allow the networks to determine independently which input variables are most important.10 However, it is difficult for a network to indicate to its developers which variables matter most in calculating the outputs. While some techniques exist to analyze the relative importance of each neuron in a neural network, they still do not present as clear a causal relationship between variables as similar data analysis methods, such as logistic regression.10

Another problem with neural networks is the tendency to overfit. Overfitting of data occurs when a data analysis model such as a neural network generates good predictions for the training data but worse ones for testing data.10 Overfitting happens because the model accounts for irregularities and outliers in the training data that may not be present across actual data sets. Developers can mitigate overfitting in neural networks by penalizing large weights and limiting the number of neurons in hidden layers.10 Reducing the number of neurons in hidden layers reduces overfitting but also limits the ability of the neural network to model more complex, nonlinear relationships.10
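The weight-penalty idea mentioned above can be sketched with a single weight updated by gradient descent. The gradient sequence here is hypothetical; the point is that the decay term pulls the weight toward zero on every step, capping its growth:

```python
def train(grads, lr=0.1, weight_decay=0.0, w=0.0):
    """SGD on one weight; `weight_decay` adds an L2-style penalty toward zero."""
    for g in grads:
        w -= lr * (g + weight_decay * w)
    return w

# The same (hypothetical) gradient sequence, with and without a weight penalty.
grads = [-1.0] * 30                        # gradients that keep pushing w upward
plain = train(grads)                       # weight grows freely
decayed = train(grads, weight_decay=0.5)   # shrinkage term limits the weight
print(plain, decayed)
```

The penalized run settles at a smaller weight than the unpenalized one, which is the mechanism by which penalizing large weights discourages a network from fitting noise in the training data too aggressively.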


Artificial neural networks allow for processing of large amounts of data, making them useful tools in many fields of research. For example, the field of bioinformatics relies heavily on neural network pattern recognition to predict various proteins’ secondary structures. One popular algorithm used for this purpose is Position Specific Iterated Basic Local Alignment Search Tool (PSI-BLAST) Secondary Structure Prediction (PSIPRED).6 This algorithm uses a two-stage structure that consists of two three-layered feedforward neural networks. The first stage of PSIPRED involves inputting a scoring matrix generated by using the PSI-BLAST algorithm on a peptide sequence. PSIPRED then takes 15 positions from the scoring matrix and uses them to output three values that represent the probabilities of forming the three protein secondary structures: helix, coil, and strand.6 These probabilities are then input into the second stage neural network along with the 15 positions from the scoring matrix, and the output of this second stage neural network includes three values representing more accurate probabilities of forming helix, coil, and strand secondary structures.6

Neural networks are used not only to predict protein structures, but also to analyze genes associated with the development and progression of cancer. More specifically, researchers and doctors use artificial neural networks to identify the type of cancer associated with certain tumors. Such identification is useful for correct diagnosis and treatment of each specific cancer.7 These artificial neural networks enable researchers to match genomic characteristics from large datasets to specific types of cancer and predict these types of cancer.7

In bioinformatic scenarios such as the above two examples, trained artificial neural networks quickly provide high-quality results for prediction tasks.6 These characteristics of neural networks are important for bioinformatics projects because bioinformatics generally involves large quantities of data that need to be interpreted both effectively and efficiently.6

The applications of artificial neural networks are also viable within fields outside the natural sciences, such as finance. These networks can be used to predict subtle trends such as variations in the stock market or when organizations will face bankruptcy.8,9 Neural networks can provide more accurate predictions more efficiently than other prediction models.9


Over the past decade, artificial neural networks have become more refined and are being used in a wide variety of fields. Artificial neural networks allow researchers to find patterns in the largest of datasets and utilize the patterns to predict potential outcomes. These artificial neural networks provide a new computational way to learn and understand diverse assortments of data and allow for a more accurate and effective grasp of the world.


  1. Ayodele, Taiwo Oladipupo. "Types of Machine Learning Algorithms." In New Advances in Machine Learning, edited by Yagang Zhang. InTech, 2010. DOI: 10.5772/9385.
  2. Muller, Berndt, and Joachim Reinhardt. Neural Networks: An Introduction.
  3. Urbas, John V. Article.
  4. Nielsen, Michael A. Neural Networks and Deep Learning. Determination Press, 2015.
  5. Mehrotra, Kishan, and Chilukuri Mohan. Elements of Artificial Neural Networks.
  6. Chen, Ke, and Lukasz A. Kurgan. "Neural Networks in Bioinformatics."
  7. Oustimov, Andrew, and Vincent Vu. "Artificial Neural Networks in the Cancer Genomics Frontier."
  8. Ma, Jiaxin. "An Enhanced Artificial Neural Network for Stock Price Predictions."
  9. Mansouri, Ali. "A Comparison of Artificial Neural Network Model and Logistic Regression in Prediction of Companies' Bankruptcy."
  10. Tu, Jack V. "Advantages and Disadvantages of Using Artificial Neural Networks versus Logistic Regression for Predicting Medical Outcomes."


Optimizing Impulse and Chamber Pressure in Hybrid Rockets




Hybrid rockets, which use a liquid oxidizer and solid cylindrical fuel grains, are currently experiencing a resurgence in research rocketry due to their comparative safety benefit.1 The unique design of a hybrid rocket enables regulation of the fuel and oxidizer input, and thus modulation of the combustion chamber pressure.2 This reduces the risk of explosion.3 This paper gives a basic overview of how a hybrid rocket functions, the roles of injector plate geometry and rocket fuel in generating thrust, and the results of the Rice Eclipse research team's study of the effect of injector plate geometries and rocket fuel combinations on thrust and impulse. The purpose of this research is to discover a fuel grain and injector plate combination with the thrust necessary to launch a hybrid rocket into suborbital space.

Solid Rockets

Most entry-level, low-hazard rockets use solid motors.4 Solid rockets are generally considered to be the safest option because of their consistent burn profile.5 These rockets have a solid cylinder of fuel in their combustion chamber that contains a blend of rocket fuel and oxidizer.5 Through the course of flight, the fuel/oxidizer blend gradually depletes like a high-power candle until the rocket reaches its apogee.5 Since the fuel and oxidizer are mixed together from the start, it is highly unlikely for a solid rocket to accumulate the concentration of fuel necessary for instantaneous combustion, which would result in an explosion.5

Liquid Rockets

Typical rockets that are deployed in space are liquid rockets.6 These rockets contain tanks of liquid oxidizer and liquid fuel that are atomized in the combustion chamber to burn at the high efficiencies required to achieve the impulse necessary for escape velocity.7 In particular, the atomization provides the high surface-area-to-volume ratio necessary for an efficient burn and allows the rocket to achieve extremely high thrust. The disadvantage of liquid rockets is the huge safety risk they pose.7 A liquid combustion system keeps the oxidizer and fuel dangerously close to mixing, which can create a concentration of oxidizer-fuel mixture susceptible to a spark and a resultant explosion.

Hybrid Rockets

Hybrid rockets combine the best of both solid and liquid rockets.6 The liquid oxidizer of the hybrid rocket is atomized over the solid fuel to give a high-thrust yet controlled burn in the combustion chamber.2 Although the sophistication of hybrid rocket engineering prevents most novice rocket builders from constructing hybrids, Rice Eclipse has constructed the fifth amateur hybrid rocket in America—which we call the MK1.

Injector Plates

Injector plates are metallic structures that function like spray guns and divide the stream of oxidizer into thousands of small atomized parts.8 A variety of designs or geometries exist that serve to break up oxidizer flow; the designs we considered in this study are the showerhead and impinging designs.


Showerhead injectors function similarly to household showerheads.4 A series of radially placed holes taper inwards as they move through the injector plate, confining the oxidizer fluid to a very small space before releasing it as a spray in the combustion chamber.8 The fluid atomizes because the oxidizer accelerates as it travels through the constricted small holes but suddenly decelerates as it enters the combustion chamber, due to the rapid change in pressure.8 This process of breaking up liquid streams through sudden resistance to flow is called the Venturi effect.8

Impinging Plates

The second type of injector plate studied is the impinging injector plate.4 In this style of plate, the holes are placed facing one another.9 As the oxidizer flows through the holes, the streams impinge, or collide, at a central location.9 Upon collision, the streams atomize.4

It is hypothesized that this plate structure should result in much better performance because of greater atomization compared to a corresponding showerhead plate.4 For this project, the angle of the impinging holes was chosen to be 30 degrees from the normal in order to optimize impingement and atomization at the end of the pre-combustion chamber.9

Fuel Grains

Rocket fuels are often made of various materials that complement each other's chemical properties to produce a high-efficiency burn.10 These fuel components are held together in a cylindrical grain by a binder compound that is also consumed in combustion.11 Therefore, it is important for both the standard fuel components and the binder to burn efficiently.11 The efficiency of a burn is quantified by the fuel regression rate: how fast the fuel grain is depleted.12 While this rate varies with combustibility and other chemical properties, it also depends heavily on the surface area available for burning.12 Fuels with high surface area, like those in a liquid or gaseous state, can achieve high regression rates.12 Thus, hybrid and solid rocket enthusiasts have been attempting to develop high-surface-area grains for efficient burns; this has previously been achieved with exotic grain configurations designed to maximize the exposure of the grain.12 Rice Eclipse has taken a different approach, using standard cylindrical fuel grains that incorporate high-regression-rate liquefying paraffin alongside conventional solid rocket fuel. These fuel grains were combusted with a nitrous oxide oxidizer.

Paraffin Fuel

Hydroxyl-terminated polybutadiene (HTPB) is the most commonly used rocket fuel for both hybrid and solid rocket motors.13 In solid rockets, the physical properties of HTPB make it an ideal chemical both to bind the oxidizer into a strong yet elastic fuel grain and to serve as a source of fuel.12 However, HTPB does not burn with the efficiencies required to accelerate rockets to orbital velocities.14 To improve pure HTPB grains, researchers have experimented with adding paraffin, a waxy compound that burns with a higher regression rate than HTPB, to the fuel grain.15 Under the high temperatures of the combustion chamber, solid paraffin wax forms a thin layer of low-surface-tension liquid on the face of the fuel grain cylinder that is exposed to the oxidizer.16 This liquid layer vaporizes due to the high flow rate and pressure of the oxidizer, producing the large surface-area-to-volume ratio that is common in solid and liquid rockets.16 This liquefaction phenomenon allows paraffin to produce high-regression-rate fuels in both hybrid and solid motors.16 However, paraffin by itself cannot be molded into a fuel grain due to its low viscosity.16 Thus, the inclusion of HTPB enables the production of a moldable fuel grain that possesses the high regression rate of paraffin wax.17

Materials and Methods

These tests were conducted in Houston, Texas in the MK1 test motor. The maximum combustion chamber pressure of MK1 was set to 500 psi. The motor used a load cell for thrust measurements and an internal pressure sensor for the combustion chamber profile. Each test fire lasted for four seconds, and three fires were conducted per configuration to ensure reproducibility and consistency of data.

We tested two types of fuel grains, at 0% paraffin/100% HTPB and 50% paraffin/50% HTPB. All of these tests used a nitrous oxide oxidizer. Each grain type was cast in the Oshman Engineering Design Kitchen at Rice University.

The injector plates were machined from stock steel in the Oshman Engineering Design Kitchen at Rice University. Two values drove the design of the injector plate: the desired oxidizer mass flow rate of 0.126 kg/s and the desired pressure drop across the plate of 1.72 MPa.
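As a rough check on how these two design values translate into injector geometry, the standard single-phase incompressible orifice model relates mass flow to total orifice area. The discharge coefficient, liquid nitrous oxide density, and hole count below are assumed illustrative values, not figures reported by the team:

```python
import math

def orifice_area(mdot, dp, rho, cd):
    """Total orifice area from the single-phase incompressible injector model:
    mdot = Cd * A * sqrt(2 * rho * dp)."""
    return mdot / (cd * math.sqrt(2 * rho * dp))

# Design values from the text: 0.126 kg/s oxidizer flow, 1.72 MPa pressure drop.
# Assumed (not from the paper): Cd = 0.6, liquid N2O density ~780 kg/m^3, 8 holes.
area = orifice_area(mdot=0.126, dp=1.72e6, rho=780.0, cd=0.6)
n_holes = 8
d_mm = 1000 * math.sqrt(4 * area / (n_holes * math.pi))
print(f"total orifice area = {area:.2e} m^2, hole diameter ~ {d_mm:.2f} mm")
```

Under these assumptions the model yields sub-millimeter holes, which is consistent with the very fine spray an injector plate is designed to produce.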

Graphite nozzles with an entrance diameter of 1.52 in, a throat diameter of 0.295 in, and an exit diameter of 0.65 in were used. Each nozzle is 1.75 in long and has a converging half angle of 40 degrees and a diverging half angle of 12 degrees.
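A quick sanity check on the nozzle geometry: the exit-to-throat and entrance-to-throat area ratios follow directly from the stated diameters, since area scales with the square of diameter:

```python
# Nozzle dimensions from the text (inches).
throat_d, exit_d, entrance_d = 0.295, 0.65, 1.52

expansion_ratio = (exit_d / throat_d) ** 2       # exit area / throat area
contraction_ratio = (entrance_d / throat_d) ** 2 # entrance area / throat area
print(expansion_ratio, contraction_ratio)
```

The expansion ratio (a bit under 5) sets how far the exhaust is expanded and accelerated in the diverging section; a modest ratio like this is typical for a motor operating at sea-level ambient pressure rather than in vacuum.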


Three different fuel and injector plate combinations were studied. We performed a base-case test of 0% paraffin/100% HTPB with a showerhead plate. We then studied the effect of adding an impinging plate to the 0% paraffin/100% HTPB grain and went on to test a 50% paraffin/50% HTPB grain on the showerhead plate. We tested these configurations to see how a paraffin-blended fuel grain and an impinging plate each independently affected rocket performance. The three scatter plots below show the thrust from each of the grains during a test fire. Thrust is directly proportional to the specific impulse of the rocket.
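The total impulse delivered by a test fire is the integral of thrust over the burn, which can be approximated from sampled thrust data with the trapezoid rule. The thrust samples below are hypothetical values shaped like a four-second burn, not Eclipse's measured data:

```python
def total_impulse(times, thrusts):
    """Integrate a sampled thrust curve (trapezoid rule); result in lbf*s."""
    return sum((t1 - t0) * (f0 + f1) / 2
               for t0, t1, f0, f1 in zip(times, times[1:], thrusts, thrusts[1:]))

# Hypothetical samples: ramp-up, a roughly flat plateau, then tail-off.
times = [0.0, 0.5, 1.0, 2.0, 3.0, 4.0]      # seconds
thrusts = [0.0, 350.0, 400.0, 390.0, 380.0, 0.0]  # lbf
print(total_impulse(times, thrusts))
```

This kind of calculation is why burn time matters alongside peak thrust: a shorter, hotter burn can deliver a similar or even smaller total impulse than a longer, gentler one.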


50% Paraffin Test

The 50% paraffin grain showed a significant improvement compared to the 0% paraffin base case, increasing the average thrust by 58% from 380 lbf to about 600 lbf. The paraffin fuel grain also improved the consistency of the burn due to the even spread of the paraffin grains in the fuel. Although chamber pressure did increase from about 23 psi to 38 psi, this increase in pressure is well below the 50 psi operating capacity of the rocket and would not be a handicap for the fuel grain.

Impinging Plate

The third test fire, which demonstrated the impinging plate, maintained an average thrust of 700 lbf at maximum capacity—the highest average thrust. This is because the impinging injector plate increases the atomization of the oxidizer and the surface area available for combustion, intensifying the resulting burn. This increase in burn efficiency also reduces the overall burn time of the fuel and in this case shortened the fire to about two seconds from a four second burn in the base case.


The data show that the impinging injector was successful at achieving a higher-thrust burn. The paraffin fuels also demonstrated improved performance over the traditional HTPB fuel grains. This improvement likely results from the reduced energy barrier to vaporization in the paraffin fuels compared to HTPB. The combination of improved vaporization and atomization allowed the impinging injector plate test to show significantly better maximum thrust than all other tested configurations. Future testing can focus on combining the impinging plate with different concentrations of paraffin to take full advantage of increased atomization and surface area.


  1. Spurrier, Zachary (2016) "Throttleable GOX/ABS Launch Assist Hybrid Rocket Motor for Small Scale Air Launch Platform". All Graduate Theses and Dissertations, 1, 1-72.
  2. Alkuam, E. and Alobaidi, W. (2016) Experimental and Theoretical Research Review of Hybrid Rocket Motor Techniques and Applications. Advances in Aerospace Science and Technology, 1, 71-82.
  3. Forsyth, Jacob Ward, (2016) "Enhancement of Volumetric Specific Impulse in HTPB/Ammonium Nitrate Mixed Hybrid Rocket Systems". All Graduate Plan B and other Reports, 876, 1-36.
  4. European Space Agency, (2017) "Solid and Liquid Fuel Rockets".
  5. Whitmore S.A., Walker S.D., Merkley D.P., Sobbi M,  (2015) “High regression rate hybrid rocket fuel grains with helical port structures”, Journal of Propulsion and Power, 31, 1727-1738.
  6. Thomas J. Rudman, (2002) “The Centaur Upper Stage Vehicle”, International Conference on Launcher Technology-Space Launcher Liquid Propulsion, 4, 1-22.
  7. D. K. Barrington and W. H. Miller, (1970) "A review of contemporary solid rocket motor performance prediction techniques", Journal of Spacecraft and Rockets, 7, 225-237.
  8. Isakowitz, Steven J. International Reference Guide to Space Launch Systems; American Institute of Aeronautics and Astronautics: Washington, D.C., 1999.
  9. Benjamin Waxman, Brian Cantwell, and Greg Zilliac, (2012) "Effects of Injector Design and Impingement Techniques on the Atomization of Self-Pressurizing Oxidizers", 48th AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit, Joint Propulsion Conferences, 6, 1-12
  10. Silant'yev, A.I. Solid Rocket Propellants Defense Technical Information Center [Online], August 22, 1967, (accessed Feb. 9, 2017).
  11. F. M. Favaró, W. A. Sirignano, M. Manzoni, and L. T. DeLuca.  (2013) "Solid-Fuel Regression Rate Modeling for Hybrid Rockets", Journal of Propulsion and Power, 29, 205-215
  12. Lengellé, G., Duterque, J., and Trubert, J.F. (2002) “Combustion of Solid Propellants” North atlantic Treaty Organization Research and Technology Organization Educational Notes, 23, 27-31.
  13. Dario Pastrone, (2012) “Approaches to Low Fuel Regression Rate in Hybrid Rocket Engines” International Journal of Aerospace Engineering, 2012,1-12.
  14. Boronowsky, Kenny Michael, (2011) "Non-homogeneous Hybrid Rocket Fuel for Enhanced Regression Rates Utilizing Partial Entrainment" San Jose State University Master's Theses. Paper 4039, 3-110.
  15. McCulley, Jonathan M, (2013) "Design and Testing of Digitally Manufactured Paraffin Acrylonitrile-Butadiene-Styrene Hybrid Rocket Motors"All Graduate Theses and Dissertations from Utah State University, 1450, 1-89.
  16. T Chai “Rheokinetic analysis on the curing process of HTPB-DOA- MDI binder system”  Institute of Physics: Materials Science and Engineering, 147, 1-8.
  17. Sutton, G. Rocket propulsion elements; New York: John Wiley & Sons, 2001



Wearable Tech is the New Black



What if our clothes could detect cancer? That may seem like a far-fetched, “only applicable in a sci-fi universe” type of concept, but such clothes do exist, and similar devices that merge technology and medicine are actually quite prominent today. The wearable technology industry, a field poised to grow to $11.61 billion by 2020,1 is exploding in the healthcare market as numerous companies produce devices that help us in our day-to-day lives, such as wearable EKG monitors and epilepsy-detecting smart watches. Advancements in sensor miniaturization and integration with medical devices have greatly opened this interdisciplinary trade by lowering costs. Wearable technology ranging from the Apple Watch to consumable body-monitoring pills can be used for everything from health and wellness monitoring to early detection of disorders. But as these technologies become ubiquitous, there are important privacy and interoperability concerns that must be addressed.

Wearable tech like the Garmin Vivosmart HR+ watch uses sensors to obtain insightful data about its wearer’s health. This bracelet-like device tracks steps walked, distance traveled, calories burned, pulse, and overall fitness trends over time.2 It transmits the information to an app on the user’s smartphone which uses various algorithms to create insights about the person’s daily activity. This data about a person’s daily athletic habits is useful to remind them that fitness is not limited to working out at the gym or playing a sport--it’s a way of life. Holding tangible evidence of one’s physical activity for the day or history of vital signs empowers patients to take control of their personal health. The direct feedback of these devices influences patients to make better choices such as taking the stairs instead of the elevator or setting up a doctor appointment early on if they see something abnormal in the data from their EKG sensor. Connecting hard evidence from the body to physical and emotional perceptions refines the reality of those experiences by reducing the subjectivity and oversimplification that feelings about personal well being may bring about.

Not only can wearable technology gather information from the body, but these devices can also detect and monitor diseases. Diabetes, the 7th leading cause of death in the United States,3 can be detected via AccuCheck, a technology that can send an analysis of blood sugar levels directly to your phone.4 Analysis software like BodyTel can also connect patients with doctors and other family members who would be interested in looking at the data gathered from the blood test.5 Ingestible devices such as the Ingestion Event Marker take monitoring a step further. Designed to monitor medication intake, the pills keep track of when and how frequently patients take their medication. The Freescale KL02 chip, another ingestible device, monitors specific organs in the body and relays the organ’s status back to a Wi-Fi enabled device which doctors can use to remotely measure the progression of an illness. They can assess the effectiveness of a treatment with quantitative evidence which makes decision-making about future treatment plans more effective.

Many skeptics hesitate to adopt wearable technology because of valid concerns about accuracy and privacy. To make sure medical devices are kept to the same standards and are safe for patient use, the US Food and Drug Administration (FDA) has begun to implement a device approval process. Approval is only granted to devices that provably improve the functionality of traditional medical devices and do not pose a great risk to patients if they malfunction.6 In spite of the FDA approval process, much research is needed to determine whether the information, analysis, and insights received from various wearable technologies can be trusted.

Privacy is another big issue especially for devices like fitness trackers that use GPS location to monitor user behavior. Many questions about data ownership (does the company or the patient own the data?) and data security (how safe is my data from hackers and/or the government and insurance companies?) are still in a fuzzy gray area with no clear answers.7 Wearable technology connected to online social media sites, where one’s location may be unknowingly tied to his or her posts, can increase the chance for people to become victims of stalking or theft. Lastly, another key issue that makes medical practitioners hesitant to use wearable technology is the lack of interoperability, or the ability to exchange data, between devices. Data structured one way on a certain wearable device may not be accessible on another machine. Incorrect information might be exchanged, or data could be delayed or unsynchronized, all to the detriment of the patient.

Wearable technology is changing the way we live our lives and understand the world around us. It is modifying the way health care professionals think about patient care by emphasizing quantitative evidence for decision making over the more subjective analysis of symptoms. The ability for numeric evidence about one’s body to be documented holds people accountable for the actions. Patients can check to see if they meet their daily step target or optimal sleep count, and doctors can track the intake of a pill and see its effect on the patient’s body. For better or for worse, we won’t get the false satisfaction of achieving our fitness goal or of believing in the success of a doctor’s recommended course of action without tangible results. While we have many obstacles to overcome, wearable technology has improved the quality of life for many people and will continue to do so in the future.


  1. Hunt, Amber. Experts: Wearable Tech Tests Our Privacy Limits. (accessed Oct. 24, 2016).
  2. Vivosmart HR+. (accessed Oct. 31, 2016).
  3. Statistics about Diabetes. (accessed Nov. 1, 2016).
  4. Accu-Chek Mobile. (accessed Oct. 31, 2016).
  5. GlucoTel. (accessed Oct. 31, 2016)
  6. Mobile medical applications guidance for industry and Food and Drug Administration staff. U. S. Food and Drug Administration, Feb. 9, 2015. (accessed Oct. 17, 2016).
  7. Meingast, M.; Roosta, T.; Sastry, S. Security and Privacy Issues with Health Care Information Technology. (accessed Nov. 1, 2016).


Algae: Pond Scum or Energy of the Future?



In many ways, rising fuel demands indicate positive development--a global increase in energy accessibility. But as the threat of climate change from burning fuel begins to manifest, it spurs the question: How can the planet meet global energy needs while sustaining our environment for years to come? While every person deserves access to energy and the comfort it brings, the population cannot afford to stand by as climate change brings about ecosystem loss, natural disaster, and the submersion of coastal communities. Instead, we need a technological solution which will meet global energy needs while promoting ecological sustainability. When people think of renewable energy, they tend to picture solar panels, wind turbines, and corn-based ethanol. But what our society may need to start picturing is that nondescript, green-brown muck that crowds the surface of ponds: algae.

Conventional fuel sources, such as oil and coal, produce energy when the carbon they contain combusts upon burning. Problematically, these sources have sequestered carbon for millions of years, hence the term fossil fuels. Releasing this carbon now increases atmospheric CO2 to levels that our planet cannot tolerate without a significant change in climate. Because fossil fuels form directly from the decomposition of plants, living plants produce the same compounds we normally burn to release energy. But, unlike fossil fuels, living biomass photosynthesizes up to the point of harvest, taking CO2 out of the atmosphere. This coupling between the uptake of CO2 by photosynthesis and the release of CO2 by combustion means using biomass for fuel should not add net carbon to the atmosphere.1 Because biofuel provides the same form of energy through the same processes as fossil fuel, but uses renewable resources and does not increase atmospheric carbon, it can viably support both societal and ecological sustainability.

If biofuel can come from a variety of sources such as corn, soy, and other crops, then why should we consider algae in particular? Algae populations double every few hours, a high growth rate that will be crucial for meeting current energy demands.2 And beyond just their power in numbers, algae provide energy more efficiently than other biomass sources, such as corn.1 Fat composes up to 50 percent of their body weight, making them the most productive provider of plant oil.2,3 Compared to traditional vegetable biofuel sources, algae can provide up to 50 times more oil per acre.4 Also, unlike other sources of biomass, using algae for fuel will not detract from food production. One of the primary drawbacks of growing biomass for fuel is that it competes with agriculture for land and draws from resources that would otherwise be used to feed people.3 Algae avoid this dilemma by growing on arid, otherwise unusable land or on water, and they need not compete with overtaxed freshwater resources: algae proliferate easily in saltwater and even wastewater.4 Furthermore, introducing algae biofuel into the energy economy would not require a systemic change in infrastructure, because it can be processed in existing oil refineries and sold in existing gas stations.2
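The growth-rate claim above can be made concrete with a back-of-the-envelope sketch. This is illustrative only: the six-hour doubling time is an assumed figure, since the article says only “every few hours.”

```python
def biomass_after(initial_kg, hours, doubling_time_hours=6.0):
    """Exponential growth: biomass doubles once every doubling_time_hours."""
    return initial_kg * 2 ** (hours / doubling_time_hours)

# One kilogram of algae growing unchecked for a day doubles four times,
# yielding 16 kg under the assumed doubling time.
day_yield = biomass_after(1.0, 24.0)
```

In practice, as the article notes later, overcrowding and light limitation cut this idealized curve off well before such yields are reached.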

However, algae biofuel has yet to make its grand entrance into the energy industry. When oil prices rose in 2007, interest shifted towards alternative energy sources. U.S. energy autonomy and the environmental consequences of carbon emission became key points of discussion. Scientists and policymakers alike were excited by the prospect of algae biofuel, and research on algae drew governmental and industrial support. But as U.S. fossil fuel production increased and oil prices dropped, enthusiasm waned.2

Many technical barriers must be overcome to achieve widespread use of algae, and progress has been slow. For example, algae’s rapid growth rate is both its asset and its Achilles’ heel. Areas colonized by algae can easily become overcrowded, which blocks access to sunlight and causes large amounts of algae to die off. Therefore, in order to farm algae as a fuel source, technology must be developed to regulate its growth.3 Unfortunately, the question of how to sustainably grow algae has proved troublesome to solve. Typically, algae for biofuel use is grown in reactors in order to control growth rate. But the ideal reactor design has yet to be developed, and in fact, some current designs use more energy than the algae yield produces.5

Although algae biofuel faces technological obstacles and dwindling government interest, many scientists today still see algae as a viable and crucial solution for future energy sustainability. UC San Diego houses the California Center for Algal Biotechnology, and Dr. Stephen Mayfield, a molecular biologist at the center, has worked with algae for over 30 years. In this time he has helped start four companies, including Sapphire Energy, founded in 2007, which focuses on developing algae biofuels. After receiving $100 million from venture capitalists in 2009, Sapphire Energy built a 70,000-square-foot lab in San Diego and a 220-acre farm in New Mexico. They successfully powered cars and jets with algae biofuel, drawing attention and $600 million in further funding from ExxonMobil. Although diminished interest then stalled production, algal researchers today believe people will come to understand the potential of using algae.2 The Mayfield Lab currently works on developing genetic and molecular tools to make algae fuel a viable means of energy production.4 They grow algae, extract its lipids, and convert them to gasoline, jet, and diesel fuel. Mayfield believes his lab can bring the price down to 80 or 85 dollars per barrel as it continues research on large-scale biofuel production.1

The advantage of growing algae for energy production lies not only in its renewability and carbon neutrality, but also its potential for other uses. In addition to just growing on wastewater, algae can treat the water by removing nitrates.5 Algae farms could also provide a means of carbon sequestration. If placed near sources of industrial pollution, they could remove harmful CO2 emissions from the atmosphere through photosynthesis.4 Additionally, algae by-products are high in protein and could serve as fish and animal feed.5

At this time of increased energy demand and dwindling fossil fuel reserves, climate change concerns caused by increased atmospheric carbon, and an interest in U.S. energy independence, we need energy sources that are economically viable but also renewable and carbon neutral.4 Algae holds the potential to address these needs. Its rapid growth and photosynthetic ability mean its use as biofuel will be a sustainable process that does not increase net atmospheric carbon. The auxiliary benefits of using algae, such as wastewater treatment and carbon sequestration, increase the economic feasibility of adopting algae biofuel. While technological barriers must be overcome before algae biofuel can be implemented on a large scale, demographic and environmental conditions today indicate that continued research will be a smart investment for future sustainability.


  1. Deaver, Benjamin. Is Algae Our Last Chance to Fuel the World? Inside Science, Sep. 8, 2016.
  2. Dineen, Jessica. How Scientists Are Engineering Algae To Fuel Your Car and Cure Cancer. Forbes UCVoice, Mar. 30, 2015.
  3. Top 10 Sources for Biofuel. Seeker, Jan. 19, 2015.
  4. California Center for Algae Biotechnology. (accessed Oct. 16, 2016).
  5. Is Algae the Next Sustainable Biofuel? Forbes StatoilVoice, Feb. 27, 2015. (republished from Dec. 2013)


Detection of Gut Inflammation and Tumors Using Photoacoustic Imaging




Photoacoustic imaging is a technique in which contrast agents absorb photon energy and emit signals that can be analyzed by ultrasound transducers. This method allows for unprecedented depth imaging that can provide a non-invasive alternative to current diagnostic tools used to detect internal tissue inflammation.1 The Rice iGEM team strove to use photoacoustic technology and biomarkers to develop a noninvasive method of locally detecting gut inflammation and colon cancer. As a first step, we genetically engineered Escherichia coli to express the near-infrared fluorescent proteins iRFP670 and iRFP713 and conducted tests using biomarkers to determine whether expression was confined to a single, localized area.


In photoacoustic imaging, laser pulses of a specific, predetermined wavelength (the excitation wavelength) activate and thermally excite a contrast agent such as a pigment or protein. The heat makes the contrast agent expand and contract, producing an ultrasonic emission at a wavelength longer than the excitation wavelength. The emission wavelength data are used to produce 2D or 3D images of tissues with high resolution and contrast.2

The objective of this photoacoustic imaging project is to engineer bacteria to produce contrast agents in the presence of biomarkers specific to gut inflammation and colon cancer and, ultimately, to deliver the bacteria into the intestines. The bacteria will produce the contrast agents in response to certain biomarkers, and lasers will excite the contrast agents, which will emit signals in local, targeted areas, allowing for a non-invasive imaging method. Our goal is to develop a non-invasive photoacoustic imaging delivery method that uses engineered bacteria to report gut inflammation and identify colon cancer. To achieve this, we constructed plasmids that have a nitric-oxide-sensing promoter (soxR/S) or a hypoxia-sensing promoter (narK or fdhf) fused to genes encoding violacein or near-infrared fluorescent proteins with emission wavelengths of 670 nm (iRFP670) and 713 nm (iRFP713). Nitric oxide and hypoxia, biological markers of gut inflammation in both mice and humans, would therefore promote expression of the desired iRFPs or violacein.3,4

Results and Discussion


To test the inducibility and detectability of our iRFPs, we used pBAD, a promoter that is part of the arabinose operon located in E. coli.5 We formed genetic circuits consisting of the pBAD expression system and iRFP670 and iRFP713 (Fig. 1a). AraC, a constitutively produced transcription regulator, changes form in the presence of arabinose sugar, allowing for the activation of the pBAD promoter.

[Figure 1b]

Fluorescence levels emitted by the iRFPs increased significantly when placed in wells containing increasing concentrations of arabinose (Figure 2). This correlation suggests that our selected iRFPs fluoresce sufficiently when promoters are induced by environmental signals. The results of the arabinose assays showed that we successfully produced iRFPs; the next steps were to engineer bacteria to produce the same iRFPs under nitric oxide and hypoxia.

Nitric Oxide

The next step was to test the nitric oxide induction of iRFP fluorescence. We used a genetic circuit consisting of a constitutive promoter and the soxR gene, which in turn expresses the SoxR protein (Figure 1b). In the presence of nitric oxide, SoxR changes form to activate the promoter soxS, which activates the expression of the desired gene. The source of nitric oxide added to our engineered bacteria samples was diethylenetriamine/nitric oxide adduct (DETA/NO).

Figure 3 shows no significant difference in fluorescence/OD600 across DETA/NO concentrations. This finding implies that our engineered bacteria were unable to detect the nitric oxide biomarker and produce iRFP; future troubleshooting includes verifying promoter strength and correct sample conditions. Furthermore, nitric oxide has an extremely short half-life of a few seconds, which may not give most of the engineered bacteria enough time to sense it, limiting iRFP production and fluorescence.

[Figure 1c]


Hypoxia

We also tested the induction of iRFP fluorescence with the hypoxia-inducible promoters narK and fdhf. We expected iRFP production and fluorescence to increase when using the narK and fdhf promoters in anaerobic conditions (Figure 1c and d).

However, we observed the opposite result: fluorescence decreased for both iRFP constructs under both promoters when exposed to hypoxia (Figure 4). This finding suggests that our engineered bacteria were unable to detect the hypoxia biomarker and produce iRFP; future troubleshooting includes verifying promoter strength and correct sample conditions.

Future Directions

Further studies include testing the engineered bacteria co-cultured with colon cancer cells and developing other constructs that will enable bacteria to sense carcinogenic tumors and make them fluoresce for imaging and treatment purposes.

Violacein has anti-cancer therapy potential

Violacein is a fluorescent pigment suitable for in vivo photoacoustic imaging in the near-infrared range and shows anti-tumoral activity.6 It has high potential for future work in bacterial tumor targeting. We have succeeded in constructing violacein using Golden Gate shuffling7 and intend to use it in experiments such as the nitric oxide and hypoxia assays we used for iRFP670 and iRFP713.

Invasin can allow for targeted cell therapy

Certain bacteria are able to invade mammalian cells using invasin, a bacterial surface protein that binds β1 integrins.8,9 If we engineer E. coli that express invasin as well as the genetic circuits capable of sensing nitric oxide and/or hypoxia, we can potentially allow the E. coli to invade colon cells and release contrast agents for photoacoustic imaging, or therapeutic agents such as violacein, only in the presence of specific biomarkers.10 Additionally, if we engineer the invasin-bearing bacteria to invade only colon cancer cells and not normal cells, this approach would potentially allow for localized targeting and treatment of cancerous tumors. Because we will be unable to test our engineered bacteria in an actual human gut, this design lets us create scenarios with parameters closer to the conditions observed there.


The International Genetically Engineered Machine (iGEM) Foundation is an independent, non-profit organization dedicated to education and competition, the advancement of synthetic biology, and the development of an open community and collaboration.

This project would not have been possible without the patient instruction and generous encouragement of our Principal Investigators (Dr. Beth Beason-Abmayr and Dr. Jonathan Silberg, BioSciences at Rice), our graduate student advisors and our undergraduate team. I would also like to thank our iGEM collaborators.

This work was supported by the Wiess School of Natural Sciences and the George R. Brown School of Engineering and the Departments of BioSciences, Bioengineering, and Chemical and Biomolecular Engineering at Rice University; Dr. Rebecca Richards-Kortum, HHMI Pre-College and Undergraduate Science Education Program Grant #52008107; and Dr. George N. Phillips, Jr., Looney Endowment Fund.

If you would like to know more about our project and our team, please visit our iGEM wiki.


  1. Ntziachristos, V. Nat Methods. 2010, 7, 603-614.
  2. Weber, J. et al. Nat Methods. 2016, 13, 639-650.
  3. Archer, E. J. et al. ACS Synth. Biol. 2012, 1, 451–457.
  4. Hӧckel, M.; Vaupel, P. JNCI J Natl Cancer Inst. 2001, 93, 266−276.
  5. Guzman, L. M. et al. J of Bacteriology. 1995, 177, 4121-4130.
  6. Shcherbakova, D. M.; Verkhusha, V. V. Nat Methods. 2013, 10, 751-754.
  7. Engler, C. et al. PLOS One. 2009, 4, 1-9.
  8. Anderson, J. et al. Sci Direct. 2006, 355, 619–627.
  9. Arao, S. et al. Pancreas. 2000, 20, 619-627.
  10. Jiang, Y. et al. Sci Rep. 2015, 19, 1-9.


A Fourth Neutrino? Explaining the Anomalies of Particle Physics




The very first neutrino experiments discovered that neutrinos exist in three flavors and can oscillate between those flavors as they travel through space. However, many recent experiments have collected anomalous data that contradicts a three neutrino flavor hypothesis, suggesting instead that there may exist a fourth neutrino, called the sterile neutrino, that interacts solely through the gravitational force. While there is no conclusive evidence proving the existence of a fourth neutrino flavor, scientists designed the IceCube laboratory at the South Pole to search for this newly hypothesized particle. Due to its immense size and sensitivity, the IceCube laboratory stands as the most capable neutrino laboratory to corroborate the existence of these particles.


Neutrinos are ubiquitous, subatomic elementary particles that are produced in a variety of ways. Some are produced from collisions between particles in the atmosphere, while others result from the decay of larger atoms.1,3 Neutrinos are thought to play a role in the interactions between matter and antimatter; furthermore, they are thought to have significantly influenced the formation of the universe.3 Thus, neutrinos are of paramount interest in the world of particle physics, with the potential to expand our understanding of the universe. When they were first posited, neutrinos were thought to have no mass because they have very little impact on the matter around them. However, decades later, it was determined that they have mass but interact with other matter only through the weak nuclear force and gravity.2

Early neutrino experiments found that the number of neutrinos measured coming from the sun was almost one third of the predicted value. Coupled with other neutrino experiments, these observations gave rise to the notions of neutrino flavors and neutrino flavor oscillations. There are three flavors of the standard neutrino: electron (ve), muon (vμ), and tauon (v𝜏). Each neutrino is a decay product that is produced with its namesake particle; for example, ve is produced alongside an electron during the decay process.9 Neutrino oscillation, proposed after these results, states that if a given flavor of neutrino is produced during decay, then at a certain distance from that spot, the probability of observing a neutrino with the properties of a different flavor becomes non-zero.2 Essentially, if ve is produced, then at a sufficient distance, the neutrino may become either vμ or v𝜏. This is caused by a discrepancy between the flavor and mass eigenstates of neutrinos.

In addition to the three flavor states, there are also three mass eigenstates, or states in which neutrinos have definite mass. Experimental evidence shows that these two kinds of states represent two distinct properties of neutrinos. As a result, neutrinos of the same flavor can be of different masses: two electron neutrinos will have the same definite flavor, but not necessarily the same definite mass state. It is this discrepancy between the masses of these particles that leads to their ability to oscillate between flavors, with the probability given by P(va→vb) = sin²(2θ)sin²(1.27Δm²L/E), where a and b are two flavors, θ is the mixing angle, Δm² is the difference of the squared masses of the two mass eigenstates (in eV²), L is the distance from source to detector (in km), and E is the energy of the neutrino (in GeV).6 Thus, each flavor is a different linear combination of the three states of definite mass.

The equation introduces the important concept of the mixing angle, which defines the difference between flavor and mass states and accounts for neutrino flavor oscillations. If the mixing angle were zero, the mass states and flavor states would be the same and no oscillations could occur: all muon neutrinos produced at a source would still be muon neutrinos, since P(vμ→vb) = 0. On the other hand, at a mixing angle of π/4, the oscillation probability can reach P(vμ→vb) = 1, and all muon neutrinos would oscillate to the other flavor at the distances where the probability function peaks.9
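The two-flavor probability formula above can be evaluated numerically. The sketch below is illustrative: the 1.27 coefficient assumes Δm² in eV², L in km, and E in GeV, and the parameter values in the examples are chosen only to show the limiting behaviors of the mixing angle discussed here.

```python
import math

def oscillation_probability(theta, delta_m2, baseline_km, energy_gev):
    """Two-flavor oscillation probability P(a -> b).

    theta       -- mixing angle in radians
    delta_m2    -- mass-squared difference in eV^2
    baseline_km -- distance L from source to detector
    energy_gev  -- neutrino energy E
    """
    amplitude = math.sin(2.0 * theta) ** 2
    phase = 1.27 * delta_m2 * baseline_km / energy_gev
    return amplitude * math.sin(phase) ** 2

# Zero mixing angle: no oscillation at any baseline.
p_zero = oscillation_probability(0.0, 2.5e-3, 1000.0, 1.0)

# Maximal mixing (pi/4): the probability can reach 1 at the right L/E.
p_max = oscillation_probability(math.pi / 4, 2.5e-3, 1000.0, 1.0)
```

At θ = 0 the amplitude term sin²(2θ) vanishes, while at θ = π/4 it equals one and the probability sweeps between 0 and 1 as L/E varies, matching the two limiting cases described in the text.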

Anomalous Data

Some experimental data has countered the notion of three-flavor neutrino oscillations.3 If the experimental interpretation is correct, it would point to the existence of a fourth or even an additional fifth mass state, opening up the possibility of other mass states that can be taken by the hypothesized sterile neutrino. The most conclusive anomalous data arise from the Liquid Scintillator Neutrino Detector (LSND) Collaboration and MiniBooNE. The LSND Collaboration at Los Alamos National Laboratory looked for oscillations between vμ neutrinos produced from muon decay and ve neutrinos, and observed an excess of ve events beyond what three-flavor oscillation predicts.6 These results strongly suggest oscillation involving an additional neutrino flavor. A subsequent experiment at Fermilab, the mini Booster Neutrino Experiment (MiniBooNE), again saw a discrepancy between predicted and observed values of ve appearance, with an excess of ve events.7 All of these results have a low probability of fit with the standard model of particle physics, which lends plausibility to the hypothesis that more than three neutrino flavors exist.

GALLEX, an experiment measuring neutrino emissions from the sun and from chromium-51 neutrino sources, as well as several reactor neutrino experiments, gave inconsistent data that did not coincide with the standard model’s predictions for neutrinos. This evidence merely suggests the presence of these new particles, but does not provide conclusive evidence for their existence.4,5 Thus, scientists designed a new project at the South Pole to search specifically for the newly hypothesized sterile neutrinos.

IceCube Studies

IceCube, a particle physics laboratory built into the Antarctic ice, was designed with the scale and precision needed to detect and register a large number of neutrino events quickly, making it well suited to collect conclusive data on sterile neutrinos. The neutrinos that come into contact with IceCube’s detectors are upgoing atmospheric neutrinos and thus have already traversed the Earth; a fraction of them pass through the Earth’s core. If sterile neutrinos exist, the dense matter of the Earth’s core should cause some of the muon neutrinos that traverse it to oscillate into sterile neutrinos, resulting in fewer muon neutrinos detected than expected in a model containing only three standard mass states, and confirming the existence of a fourth flavor.3

For these particles that pass upward through IceCube’s detectors, the Earth filters out the charged-particle background noise, allowing only the detection of muons (the particles of interest) from neutrino interactions. The small fraction of upgoing atmospheric neutrinos that enter the ice surrounding the detector site undergo reactions with the bedrock and ice to produce muons. These newly created muons then traverse the ice, emitting Cherenkov light, a type of electromagnetic radiation that is detected by the Digital Optical Modules (DOMs) of IceCube. This radiation is produced when a particle with mass passes through a substance faster than light can pass through that same substance.8
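The Cherenkov condition just described (a particle outrunning light in the medium) reduces to a simple threshold calculation. This is a sketch; the refractive index of ice used in the example, n ≈ 1.31, is an assumed textbook figure, not a value from the article.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s in vacuum

def cherenkov_threshold(refractive_index):
    """Minimum particle speed (m/s) for Cherenkov emission in a medium."""
    return SPEED_OF_LIGHT / refractive_index

def emits_cherenkov(particle_speed, refractive_index):
    """True when the particle exceeds the local speed of light c/n."""
    return particle_speed > cherenkov_threshold(refractive_index)

# A relativistic muon at 0.99c easily exceeds c/1.31 (about 0.76c) in ice.
fast_muon_radiates = emits_cherenkov(0.99 * SPEED_OF_LIGHT, 1.31)
```

This is why only sufficiently energetic muons light up the DOMs: slower charged particles stay below the c/n threshold and emit no Cherenkov radiation.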

A study conducted in 2011-2012 used data from the full range of DOMs, rather than just a portion.8 These data, along with other previous data, were examined for conclusive evidence of sterile neutrino oscillations in samples of atmospheric neutrinos. Experimental data were compared to a Monte Carlo simulation. For each hypothesis of the makeup of the sterile neutrino, the Poissonian log likelihood, a probability function that finds the best correlation of experimental data to a hypothetical model, was calculated. Based on the results shown in Figure 2, no evidence points towards sterile neutrinos.8


Other studies conducted at IceCube have likewise found no indication of sterile neutrinos. Although this evidence weighs against sterile neutrinos, it does not completely rule out their existence. These experiments have focused only on certain mixing angles and may yield different results at other mixing angles. Also, if sterile neutrinos are conclusively found to be nonexistent by IceCube, there remains the question of why the anomalous data appeared at LSND and MiniBooNE. Thus, IceCube will continue sterile neutrino experiments at variable mixing angles to search for an explanation of the anomalies observed in the previous neutrino experiments.


  1. Fukuda, Y. et al. Evidence for Oscillation of Atmospheric Neutrinos. Phys. Rev. Lett. 1998, 81, 1562.
  2. Beringer, J. et al. Review of Particle Physics. Phys. Rev. D. 2012, 86, 010001.
  3. Schmitz, D. W. Viewpoint: Hunting the Sterile Neutrino. Physics. [Online] 2016, 9, 94.
  4. Hampel, W. et al. Final Results of the 51Cr Neutrino Source Experiments in GALLEX. Phys. Rev. B. 1998, 420, 114.
  5. Mention, G. et al. Reactor Antineutrino Anomaly. Phys. Rev. D. 2011, 83, 073006.
  6. Aguilar-Arevalo, A. A. et al. Evidence for Neutrino Oscillations for the Observation of ve Appearance in a vμ Beam. Phys. Rev. D. 2001, 64, 122007.
  7. Aguilar-Arevalo, A. A. et al. Phys. Rev. Lett. 2013, 110, 161801.
  8. Aartsen, M. G. et al. Searches for Sterile Neutrinos with the IceCube Detector. Phys. Rev. Lett. 2016, 117, 071801.



Fire the Lasers



Imagine a giant solar harvester flying in geosynchronous orbit which, using solar energy, beams radiation to a single point 36,000 km away. It would look like a space weapon straight out of Star Wars. Surprisingly, this concept might be the next so-called “moonshot” project that humanity needs to move forward. In space-based solar power generation, a solar harvester like the one described above would generate DC current from solar radiation using photovoltaic cells and then convert it into microwaves. These microwaves would be beamed to a rectifying antenna (or rectenna) on the ground, which would convert them back into direct current (DC). Finally, a converter would change the DC energy to AC to be supplied to the grid.1
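The conversion chain described above (sunlight → DC → microwave → rectenna → DC → AC) compounds losses at every stage. The sketch below illustrates that compounding; every efficiency figure is a hypothetical assumption for illustration, not a published value.

```python
# Each stage of the SBSP power chain with an assumed (hypothetical) efficiency.
STAGES = [
    ("photovoltaic conversion to DC", 0.30),
    ("DC to microwave conversion", 0.80),
    ("beam capture at the rectenna", 0.90),
    ("microwave back to DC", 0.85),
    ("DC to grid AC", 0.95),
]

def delivered_power(solar_input_watts):
    """Multiply the input power through every stage of the chain."""
    power = solar_input_watts
    for _name, efficiency in STAGES:
        power *= efficiency
    return power

# Under these assumptions, 1 MW of collected sunlight delivers only
# about 174 kW to the grid.
grid_watts = delivered_power(1_000_000.0)
```

The multiplicative structure is the point: improving any single stage raises end-to-end output proportionally, which is why JAXA’s transmission-efficiency work described below matters so much.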

With ever-increasing global energy consumption and rising concerns of climate change due to the burning of fossil fuels, there has been increasing interest in alternative energy sources. Although renewable energy technology is improving every year, its current energy capacity is not enough to obviate the need for fossil fuels. Currently, wind and solar sources have capacity factors (a ratio of an energy source’s actual output over a period of time to its potential output) of around 34 and 26 percent, respectively. In comparison, nuclear and coal sources have capacity factors of 90 and 70 percent, respectively.2 Generation of energy using space solar power satellites (SSPSs) could pave the path humanity needs to move towards a cleaner future. Unlike traditional solar power, which relies on favorable weather conditions, SSPSs would allow continuous, green energy generation.
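Capacity factor, as defined parenthetically above, is simply actual output divided by potential output over the same period. A quick sketch with hypothetical plant numbers:

```python
def capacity_factor(actual_mwh, nameplate_mw, hours):
    """Ratio of energy actually produced to the maximum possible output."""
    return actual_mwh / (nameplate_mw * hours)

# A hypothetical 100 MW plant producing 788,400 MWh over a year (8,760 h)
# runs at a 90 percent capacity factor, comparable to the nuclear figure
# cited in the text.
cf = capacity_factor(788_400.0, 100.0, 8_760.0)
```

By the same arithmetic, a wind farm at a 34 percent capacity factor would need roughly three times the nameplate capacity to deliver the same annual energy.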

Although space-based solar power (SBSP) might sound pioneering, scientists have been flirting with the idea since Dr. Peter Glaser introduced the concept in 1968. Essentially, SBSP systems can be characterized by three elements: a large solar collector in geostationary orbit fitted with reflective mirrors, wireless transmission via microwave or laser, and a receiving station on Earth armed with rectennas.3 Such an implementation would require complete proficiency in reliable space transportation, efficient power generation and capture, practical wireless transmission of power, economical satellite design, and precise satellite-antenna calibration systems. Collectively, these goals might seem insurmountable, but taken separately, they are actually feasible. Using the principles of optics, scientists are optimizing space station design to maximize energy collection.4 There have been advancements in rectennas that allow the capture of even weak, ambient microwaves.5 With the pace of advancement speeding up every year, it’s easy to feel like the future of renewable energy is rapidly approaching. However, these advancements will be limited to literature if there are no global movements to utilize SBSP.

Japan Aerospace Exploration Agency (JAXA) has taken the lead in translating SBSP from the page to the launch pad. Due to its lack of fossil fuel resources and the 2011 incident at the Fukushima Daiichi nuclear plant, Japan, in desperate need of alternative energy sources, has proposed a 25-year technological roadmap to the development of a one-gigawatt SSPS station. To accomplish this incredible feat, Japan plans on deploying a 10,000 metric ton solar collector that would reside in geostationary orbit around Earth.6 Surprisingly, the difficult aspect is not building and launching the giant solar collector; it’s the technical challenge of transmitting the energy back to earth both accurately and efficiently. This is where JAXA has focused its research.

Historically, wireless power transmission has been accomplished via laser or microwave transmissions. Laser and microwave radiation are similar in many ways, but when it comes down to which one to use for SBSP, microwaves are a clear winner. Microwaves have longer wavelengths (usually lying between five and ten centimeters) than those of lasers (which often are around one micrometer), and are thus better able to penetrate Earth’s atmosphere.7 Accordingly, JAXA has focused on optimizing powerful and accurate microwave generation. JAXA has developed kW-class high-power microwave power transmission using phased, synchronized, power-transmitting antenna panels. Due to current limitations on communication technologies, JAXA has also developed advanced retrodirective systems, which allow high-accuracy beam pointing.8 In 2015, JAXA was able to deliver 1.8 kilowatts accurately to a rectenna 55 meters away, which, according to JAXA, is the first time that so much power has been transmitted with any appreciable precision. Although this may seem insignificant compared to the 36,000 km transmissions required for a satellite in geosynchronous orbit, it is a huge achievement for mankind. It demonstrates that large-scale wireless transmission is a realistic option to power electric cars, transmission towers, and even satellites. JAXA, continuing on its roadmap, plans to conduct the first microwave power transmission in space by 2018.

Although the challenges ahead for space-based solar power generation are enormous in both economic and technical terms, the results could be revolutionary. In a manner similar to the introduction of coal and oil, practical SBSP systems would completely alter human civilization. With continuous green energy generation, SBSP systems could resolve our energy conflicts and allow progression to the next phase of civilization. If everything goes well, air pollution and oil spills may become mere bygones.


  1. Sasaki, S. IEEE Spec. 2014, 51, 46-51.
  2. EIA (U.S. Energy Information Administration). (accessed Oct. 29, 2016).
  3. Wolfgang, S. Acta Astro. 2004, 55, 389-399.
  4. Yang, Y. et al. Acta Astro. 2016, 121, 51-58.
  5. Wang, R. et al. IEEE Trans. Micro. Theo. Tech. 2014, 62, 1080-1089.
  6. Sasaki, S. Japan Demoes Wireless Power Transmission for Space-Based Solar Farms. IEEE Spectrum [Online], March 16, 2015. (accessed Oct. 29, 2016).
  7. Summerer, L. et al. Concepts for wireless energy transmission via laser. Europeans Space Agency (ESA)-Advanced Concepts Team [Online], 2009. (accessed Oct. 29, 2016).
  8. Japan Space Exploration Agency. Research on Microwave Wireless Power Transmission Technology. (accessed Oct. 29, 2016).



Engineering Eden: Terraforming a Second Earth



Today’s world is faced with thousands of complex problems that seem insurmountable. One of the most pressing is the issue of the environment and how our over-worked planet can sustain such an ever-growing society. Our major source of energy is finite and rapidly depleting. Carbon dioxide emissions have passed the “irreversibility” threshold. Our oceans and atmosphere are polluted, and scientists predict a grim future for Mother Earth if humans do not change our wasteful ways. A future similar to the scenes of “Interstellar” or “Wall-E” is becoming increasingly less fictitious. While most of the science world is turning to alternative fuels and public activism as vehicles for change, some radical experts in climate change and astronomy suggest relocation to a different planet: Mars. The Mars rover, Curiosity, presents evidence that Mars has the building blocks of a potential human colony, such as the presence of heavy metals and nutrients nestled in its iconic red surface. This planet, similar in location, temperature, and size to Earth, seems to have the groundwork to be our next home. Now we must ponder: perhaps our Earth was not meant to sustain human life for eternity. Perhaps we are living at the tail end of our time on Earth.

Colonizing Mars would be a project beyond any in human history, and the rate-limiting step would be developing an atmosphere that could sustain human, animal, and plant life. The future of mankind on Mars is contingent on a breathable atmosphere, so that humans and animals could thrive without the assistance of oxygen tanks and vegetation could grow without the assistance of a greenhouse. The Martian atmosphere contains almost no oxygen; it is roughly 95.7 percent carbon dioxide. It is also only about one percent as dense as Earth’s atmosphere, so it provides essentially no protection from the Sun’s radiation. Our atmosphere, armed with a thick layer of ozone, absorbs or deflects the majority of radiation before it reaches the surface. Even if a human could breathe on the surface of Mars, he or she would die from radiation poisoning or cancer. Fascinating ways to address this have been proposed. One is mass hydrogen bombing across the entire surface of the planet, creating an atmosphere of dust and debris thick enough to block ultraviolet radiation. A similar effect could be achieved by physically harnessing nearby asteroids and catapulting them into the surface. A final popular idea is the use of mega-mirrors to focus the energy of the Sun onto the surface, warming it enough to release greenhouse gases trapped deep within the soil.1

However, bioengineers have suggested another way of colonizing Mars, one that does not require factories or asteroids, or even human action for that matter. Instead, we would use genetically modified plants, algae, and bacteria to build the Martian atmosphere. The Defense Advanced Research Projects Agency (DARPA) is pursuing research in developing these completely new life forms.2 These organisms would not need oxygen or water to survive; instead, they would synthesize a new atmosphere from the materials already on Mars. The bioengineering lab at DARPA has developed software called DTA GView, which has been called a “Google Maps of genomes.” It acts as a library of genes, and DARPA has identified genes that could be inserted into extremophile organisms. A bacterium called Chroococcidiopsis is resistant to wide temperature swings and hypersalinity, two conditions found on Mars.3 Carnobacterium spp. have been shown to thrive under low pressure and in the absence of oxygen. These organisms could potentially be genetically engineered to live on Mars and add vital life-sustaining molecules to its atmosphere.

Other scientific developments must occur before these organisms are ready to pioneer the human future on Mars. Curiosity must send Earth more data about the materials present in Martian soil, and we must learn how to choose, build, and transport the ideal candidate organisms to Mars. Moreover, many argue that our scientific research should focus on healing our current home instead of building a new one: if we are willing to invest the immense scientific capital required to terraform another planet, we could likely also mitigate the problem of pollution on Earth. However, in such a challenging time, we must venture to new frontiers, and the bioengineers at DARPA have given us an alternative way to go where no man or woman has gone before.


  1. iGEM Valencia Team. The Ethics of Terraforming Mars: A Review. 2010, 1-12. (accessed Nov. 2, 2016).
  2. Terraforming Mars with Microbes. (accessed Nov. 4, 2016).
  3. We Are Engineering the Organisms That Will Terraform Mars. (accessed Nov. 4, 2016).


Who Says Time Travel Isn't Possible?



There are several very real challenges that must be overcome when attempting to travel to another star, let alone another galaxy. However, with today’s technology and understanding of physics, we can envision potential ways to make interstellar travel, and even time travel, a reality. This raises the question of why other civilizations have not already invented and made use of such technology: given the immensity of the universe, there are bound to be other intelligent civilizations, begging the famous question of the Fermi paradox, “So where is everybody?” Answering it could help us evolve into an interstellar or intergalactic species, while failing to do so could spell our demise.

Einstein’s theory of special relativity is where the cosmic speed limit (the speed of light) was first introduced. His theory also gives rise to the concept of time dilation, which states that time runs slower for those traveling extremely fast than it does for those on Earth, and that distances shrink when traveling at high speeds.1 So, when a spaceship is traveling close to the speed of light, time measured aboard runs slower than it would on clocks at rest. This can play an important role in interstellar travel, because it allows travelers moving close to the cosmic speed limit to age more slowly than those on Earth. For example, if a spaceship left Earth in the year 2100 and made a round trip to the star Vega at 90% of the speed of light, it would return in Earth year 2156, but only 24 years would have passed for the crew of the ship.2 Because of time dilation, journeys could be made to very distant places while the crew ages very little. Due to this amazing effect, one could theoretically travel to the black hole at the center of the Milky Way galaxy, 28,000 light years away, and age only 21 years, if traveling fast enough.2 At a high enough fraction of the speed of light, you could reach Andromeda (2.5 million light years away) and return to Earth only 60 years older, while 5 million years have passed on Earth.2 Clearly, time dilation is a real form of time travel to the future, assuming relativistic speeds are achievable. The main obstacle for this method is reaching a fraction of the speed of light at which time dilation becomes significant, which requires enormous amounts of energy.
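The figures above follow directly from the time-dilation factor of special relativity, γ = 1/√(1 − v²/c²). A minimal sketch in Python (the 25-light-year distance to Vega is an assumed round number, chosen to be consistent with the trip quoted above):

```python
import math

def round_trip_times(distance_ly, v_frac):
    """Earth-frame and ship-frame durations (in years) for a round trip
    at a constant speed of v_frac * c, ignoring acceleration phases."""
    earth_years = 2 * distance_ly / v_frac          # distance / speed
    gamma = 1.0 / math.sqrt(1.0 - v_frac ** 2)      # Lorentz factor
    ship_years = earth_years / gamma                # proper time aboard
    return earth_years, ship_years

earth, ship = round_trip_times(25, 0.90)  # Vega, at 90% of light speed
print(f"Earth clock: {earth:.0f} years, ship clock: {ship:.0f} years")
```

At 0.9c the Lorentz factor is about 2.3, so roughly 56 Earth years (departure in 2100, return in 2156) shrink to about 24 years aboard, matching the numbers above.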

Though obtaining the energy required for interstellar travel may seem like a far-off goal, the science and technology we have today could make it possible to travel great distances at great speeds in the near future. The first of these technologies is the rocket powered by nuclear energy. According to Albert Einstein’s equation E = mc², even a small amount of mass can release a very large amount of energy. In fact, through nuclear fission, the conversion of only 0.6 grams of uranium (less than the weight of an M&M) was sufficient to level Hiroshima during World War II.3 Nuclear fission makes use of the mass lost when an atomic nucleus splits in two. Nuclear fusion, on the other hand, involves two atomic nuclei fusing into one, releasing up to ten times the energy of fission. This process, occurring at the Sun’s core, is its source of energy. Nuclear fusion, if controlled, is a viable source of energy for the future: just the hydrogen present in the water coming out of one faucet could provide enough energy for the United States’ current needs, a staggering 2.85 × 10¹² joules per second.2
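The Hiroshima comparison above can be checked directly from E = mc². A quick sketch (the roughly 15-kiloton yield used for comparison is a commonly cited estimate, not a figure from the sources above):

```python
C = 299_792_458.0      # speed of light in m/s
TON_TNT_J = 4.184e9    # energy released by one ton of TNT, in joules

def mass_to_energy(mass_kg):
    """Rest-mass energy, E = m * c^2, in joules."""
    return mass_kg * C ** 2

energy_j = mass_to_energy(0.0006)           # 0.6 grams of converted mass
kilotons = energy_j / (TON_TNT_J * 1000)    # express as kilotons of TNT
print(f"{energy_j:.2e} J, about {kilotons:.0f} kilotons of TNT")
```

The result, about 5.4 × 10¹³ joules or roughly 13 kilotons of TNT, is in line with the ~15-kiloton yield usually quoted for the Hiroshima bomb.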

In order to conduct nuclear fusion, an environment similar to the center of the Sun must be created, with comparable temperatures and pressures. Gas must be converted into a highly ionized state known as plasma. Recently, MIT used the extreme magnetic fields of its Alcator C-Mod tokamak reactor to create the highest plasma pressure ever recorded.4 In addition, a 7-story reactor in southern France, 800 times larger than MIT’s, is set to be completed in 2025 and will have magnets that are each as heavy as a Boeing 747.5 Nuclear fusion could one day provide nearly limitless energy for a spacecraft, accelerating it to relativistic speeds. Enormous scoops could be attached to the spacecraft to collect interstellar hydrogen gas throughout the journey, allowing travel to the distant corners of the galaxy.

Another technological development that could facilitate interstellar travel is the EM drive, a purported electromagnetic thruster. This form of propulsion is highly controversial because it seemingly violates Newton’s third law and, with it, the law of conservation of momentum: for something to accelerate in one direction, it must push on something else in the opposite direction. The EM drive is thought to use electromagnetic energy as its only input, generating thrust from microwaves bouncing within a closed engine cavity that push on its interior and accelerate the thruster.6 Simply put, the EM drive appears to move in one direction without expelling any propellant or being pushed by an external force. It has been tested multiple times, most notably by NASA’s Eagleworks Lab, and has repeatedly been measured to produce a small amount of thrust, making it difficult for scientists to dismiss the possibility that it works.7 The thrust cannot yet be explained by our current understanding of physics, but the Eagleworks Lab has nevertheless submitted its results for publication in the American Institute of Aeronautics and Astronautics’ Journal of Propulsion and Power. This Eagleworks experiment, if shown to be reproducible, would open up opportunities for researchers around the world to conduct further experimentation. In August, plans were announced to test the EM drive in space, which would be the most robust test of its efficacy to date. The EM drive could one day provide an essentially limitless supply of thrust without the need for propellant, allowing a spacecraft to accelerate continuously until it reached relativistic speeds.

These are only two examples of technologies that could make interstellar travel possible. In the next few decades, we can look forward to more innovative research that will push the boundaries of science and redefine interplanetary, interstellar, and intergalactic travel. If relativistic speeds are achieved, humans could travel thousands, if not millions of years into the future by aging much slower than the rate at which time would actually pass on Earth. So who says we can’t time travel? Certainly not science!


  1. Bennett, J. The Cosmic Perspective; Pearson: Boston, 2014.
  2. Bennett, J. Life in the Universe; Pearson: San Francisco, 2012.
  3. Glasstone, S.; Dolan, P. The Effects of Nuclear Weapons, 3rd ed.; United States Department of Defense and United States Department of Energy: 1977.
  4. Plasma Science and Fusion Center. (accessed Nov. 11, 2016).
  5. ITER. (accessed Nov. 05, 2016).
  6. Shawyer, R. New Scientist [Online] 2006, (accessed Nov. 10, 2016).
  7. Wang, B. NASA Emdrive experiments have force measurements while the device is in a hard vacuum. NextBigFuture, Feb. 07, 2015. (accessed Nov 7, 2016).





Graphene Nanoribbons and Spinal Cord Repair



The same technology that has been used to strengthen polymers1, de-ice helicopter blades2, and create more efficient batteries3 may one day help those with damaged or even severed spinal cords walk again. The Tour Lab at Rice University, headed by Dr. James Tour, is harnessing the power of graphene nanoribbons to create a new material called Texas-PEG that may revolutionize the way we treat spinal cord injuries; one day, it may even make whole-body transplants a reality.

Dr. Tour, the T.T. and W.F. Chao Professor of Chemistry, Professor of Materials Science and NanoEngineering, and Professor of Computer Science at Rice University, is a synthetic organic chemist who focuses mainly on nanotechnology. He holds over 120 patents, has published over 600 papers, and was inducted into the National Academy of Inventors in 2015.4 His lab is currently working on several projects, such as investigating applications of graphene, creating and testing nanomachines, and synthesizing and imaging nanocars. The Tour Lab first discovered graphene nanoribbons while working with carbon nanotubes back in 2009.5 The team found a way to “unzip” carbon nanotubes into thin strips, the graphene nanoribbons, by inserting sodium and potassium atoms between the concentric layers of the nanotubes until the tubes split open. “We fell upon the graphene nanoribbons,” says Dr. Tour. “I had seen it a few years ago in my lab but I didn’t believe it could be done because there wasn’t enough evidence. When I realized what we had, I knew it was enormous.”

This discovery was monumental: graphene nanoribbons have since been used in a variety of applications because of their novel characteristics. Less than 50 nm wide (about the width of a virus), graphene nanoribbons are 200 times stronger than steel and are excellent conductors of heat and electricity. They can make materials significantly stronger or electrically conductive without adding much weight. It wasn’t until many years after their initial discovery, however, that the lab found that graphene nanoribbons could be used to help heal severed spinal cords.

The idea began after one of Dr. Tour’s postgraduate students read on Reddit about European research on head and whole-body transplants. This research focused on pairing a brain-dead patient with a healthy body with someone who has brain activity but has lost bodily function. The biggest challenge, however, was fusing the spine back together. The neurons in the two separated parts of the spinal cord could not communicate with one another, and as a result, the animals in whole-body and head transplant experiments regained only about 10% of their original motor function. The student contacted the European researchers, who then proposed using the Tour Lab’s graphene nanoribbons in their research, as Dr. Tour’s team had already shown that neurons grow very well along graphene.

“When a spinal cord is severed, the neurons grow from the bottom up and the top down, but they pass like ships in the night; they never connect. But if they connect, they will be fused together and start working again. So the idea was to put very thin nanoribbons in the gap between the two parts of the spinal cord to get them to align,” explains Dr. Tour. Nanoribbons are extremely conductive, so when their edges are activated with polyethylene glycol, or PEG, they form an active network that allows the spinal cord to reconnect. This material is called Texas-PEG, and although it is only about 1% graphene nanoribbons, this is still enough to create an electric network through which the neurons in the spinal cord can connect and communicate with one another.

The Tour Lab tested this material on rats by severing their spinal cords and then applying Texas-PEG to see how much mobility was recovered. The rats scored about 19 out of 21 on a mobility scale after only three weeks, a remarkable advance over the 10% recovery in previous European trials. “It was just phenomenal. There were rats running away after 3 weeks with a totally severed spinal cord! We knew immediately that something was happening because one day they would touch their foot and their brain was detecting it,” says Dr. Tour. The first human trials will begin in 2017 overseas. Due to FDA regulations, it may be a while before we see trials in the United States, but the FDA will accept data from successful trials in other countries. Graphene nanoribbons may one day become a viable treatment option for spinal injuries.

This isn’t the end of Dr. Tour’s research with graphene nanoribbons. “We’ve combined our research with neurons and graphene nanoribbons with antioxidants: we inject antioxidants into the bloodstream to minimize swelling. All of this is being tested in Korea on animals. We will decide on an optimal formulation this year, and it will be tried on a human this year,” Dr. Tour explained. Most of all, Dr. Tour and his lab would like to see their research with graphene nanoribbons used in the United States to help quadriplegics who suffer from limited mobility due to spinal cord damage. What began as a lucky discovery now has the potential to change the lives of thousands.


  1. Wijeratne, Sithara S., et al. Sci. Rep. 2016, 6.
  2. Raji, Abdul-Rahman O., et al. ACS Appl. Mater. Interfaces. 2016, 8 (5), 3551-3556.
  3. Salvatierra, Rodrigo V., et al. Adv. Energy Mater. 2016, 6 (24).
  4. National Academy of Inventors. (accessed Feb. 1, 2017).
  5. Zehtab Yazdi, Alireza, et al. ACS Nano. 2015, 9 (6), 5833-5845.


Haptics: Touching Lives



Every day you use a device that has haptic feedback: your phone. Every little buzz for a notification, key press, or failed unlock is an example of haptic feedback. Haptics is essentially tactile feedback, a form of physical feedback that uses vibrations. The field is undergoing massive development, and applications of haptic technology are expanding rapidly. Some of the up-and-coming uses for haptics include navigational cues while driving, video games, virtual reality, robotics, and, as in Dr. O’Malley’s case, the medical field, with prostheses and medical training tools.

Dr. Marcia O’Malley has been involved in the biomedical field ever since working in an artificial knee implant research lab as an undergraduate at Purdue University. While in graduate school at Vanderbilt University, she worked in a lab focused on human-robot interfaces, where she spent her time designing haptic feedback devices. Dr. O’Malley currently runs the Mechatronics and Haptic Interfaces (MAHI) Lab at Rice University, and she was recently awarded a million-dollar National Robotics Initiative grant for one of her projects. The MAHI Lab “focuses on the design, manufacture, and evaluation of mechatronic or robotic systems to model, rehabilitate, enhance or augment the human sensorimotor control system.”1 Her current research focuses on prosthetics and rehabilitation with an effort to include haptic feedback. She is currently working on the MAHI EXO-II. “It’s a force feedback exoskeleton, so it can provide forces, it can move your limb, or it can work with you,” she said. The primary project involving this exoskeleton is focused on “using electrical activity from the brain captured with EEG… and looking for certain patterns of activation of different areas of the brain as a trigger to move the robot.” In other words, Dr. O’Malley is attempting to enable exoskeleton users to control the device through brain activity.

Dr. O’Malley is also conducting another project, funded by the National Robotics Initiative grant, to develop a haptic cueing system that helps medical students train for endovascular surgeries. The idea for this system came from two sources. The first was her prior research with joysticks: she worked on a project that used a force-feedback joystick to swing a ball to hit targets.2 From this research, Dr. O’Malley found that “we could measure people’s performance, we could measure how they used the joystick, how they manipulated the ball, and just from different measures about the characteristics of the ball movement, we could determine whether you were an expert or a novice at the task… If we use quantitative measures that tell us about the quality of how they’re controlling the tools, those same measures correlate with the experience they have.” After talking with surgeons, Dr. O’Malley found that these techniques for measuring movement could work well for training surgeons.
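One common way to turn “the quality of how they’re controlling the tools” into a number is a jerk-based smoothness metric from the motor-control literature. The Python sketch below is purely illustrative, a hypothetical stand-in rather than the MAHI Lab’s actual measure:

```python
def smoothness_score(positions, dt):
    """Dimensionless squared-jerk smoothness measure: lower = smoother.
    (Illustrative stand-in for a lab's movement-quality metrics.)"""
    # Third finite difference approximates jerk, d^3x/dt^3.
    jerk = [(positions[i + 3] - 3 * positions[i + 2]
             + 3 * positions[i + 1] - positions[i]) / dt ** 3
            for i in range(len(positions) - 3)]
    duration = dt * (len(positions) - 1)
    amplitude = max(positions) - min(positions)
    # Scale by duration and amplitude so the score is dimensionless.
    return sum(j * j for j in jerk) * dt * duration ** 5 / amplitude ** 2

dt = 0.01
smooth = [10 * t ** 3 - 15 * t ** 4 + 6 * t ** 5     # minimum-jerk reach, 1 s
          for t in (i * dt for i in range(101))]
shaky = [x + 0.02 * (-1) ** i for i, x in enumerate(smooth)]  # added tremor
print(smoothness_score(smooth, dt), smoothness_score(shaky, dt))
```

The alternating tremor inflates the third derivative, so the shaky trace scores far worse than the smooth reach, which is the sense in which such measures can separate novices from experts.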

The second impetus for this research came from an annual conference about haptics and force feedback. At the conference she noticed that more and more people were moving towards wearable haptics, such as the Fitbit, which vibrates on your wrist. She also saw that everyone was using these vibrational cues to give directional information. However, “nobody was really using it as a feedback channel about performance,” she said. These realizations led to the idea of the vibrotactile feedback system.

Although the project is still in its infancy, the current anticipated product is a virtual reality simulator which will track the movements of the tool. According to Dr. O’Malley, the technology would provide feedback through a single vibrotactile disk worn on the upper limb. The disk would use a voice coil actuator that moves perpendicular to the wearer’s skin. Dr. O’Malley is currently working with Rice psychologist Dr. Michael Byrne to determine which frequency and amplitude to use for the actuator, as well as the timing of the feedback to avoid interrupting or distracting the user.
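As a toy illustration of the parameters being tuned, a single cue from such an actuator can be modeled as a short sine burst whose frequency, amplitude, and duration are the design variables. This sketch is hypothetical, not the lab’s actual drive signal; the 250 Hz default is chosen because skin vibration sensitivity is commonly reported to peak near that frequency:

```python
import math

def vibrotactile_burst(freq_hz=250.0, amp=0.8, duration_s=0.1,
                       sample_rate=8000):
    """Hypothetical drive signal for a voice coil actuator: a sine burst.
    freq_hz, amp, and duration_s are the cue parameters being tuned."""
    n = int(duration_s * sample_rate)
    return [amp * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

pulse = vibrotactile_burst()
print(len(pulse), round(max(pulse), 3))  # 800 samples, peak amplitude 0.8
```

Varying the timing of such bursts relative to the tracked tool motion is exactly the kind of question the frequency/amplitude study described above would settle.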

Ultimately, this project would measure the medical students’ smoothness and precision while using tools, as well as give feedback to the students regarding their performance. In the future, it could also be used in surgeries during which a doctor operates a robot and receives force feedback through similar haptics. During current endovascular surgery, a surgeon uses screens that project a 2D image of the tools in the patient. Incorporating 3D views would need further FDA approval and could distract and confuse surgeons given the number of screens they would have to monitor. This project would offer surgeons a simpler way to operate. From exoskeletons to medical training, there is a huge potential for haptic technologies. Dr. O’Malley is making this potential a reality.


  1. Mechatronics and Haptic Interfaces Lab Home Page. (accessed Nov. 7, 2016).
  2. O’Malley, M. K. et al. J. Dyn. Sys., Meas., Control. 2005, 128 (1), 75-85.