
Machine Minds: An Exploration of Artificial Neural Networks




An artificial neural network is a computational method that mirrors the way a biological nervous system processes information. Artificial neural networks are used in many different fields to process large sets of data, often providing useful analyses that allow for prediction and identification of new data. However, neural networks struggle to explain why particular outcomes occur. Despite these difficulties, neural networks are valuable data analysis tools applicable to a variety of fields. This paper will explore the general architecture, advantages, limitations, and applications of neural networks.


Artificial neural networks attempt to mimic the functions of the human brain. Biological nervous systems are composed of building blocks called neurons, which communicate via axons and dendrites. When a biological neuron receives a message, it sends an electric signal down its axon. If this signal is greater than a threshold value, it is converted to a chemical signal that is sent to nearby neurons.2 Similarly, while artificial neural networks are dictated by formulas and data structures, they can be conceptualized as being composed of artificial neurons, which function much like their biological counterparts. When an artificial neuron receives data, if the change in its activation level exceeds a defined threshold value, the artificial neuron creates an output signal that propagates to other connected artificial neurons.2 The human brain learns from past experiences and applies this information in new settings. Similarly, artificial neural networks can adapt their behavior until their responses are both accurate and consistent in new situations.1
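This threshold behavior can be sketched as a single artificial neuron; the weights and threshold below are illustrative values, not from the paper:

```python
# A minimal artificial neuron: it fires (outputs 1) only when the weighted
# sum of its inputs exceeds a threshold, loosely mirroring the way a
# biological neuron converts a strong enough electric signal into output.

def artificial_neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

print(artificial_neuron([0.5, 0.9], [1.0, 1.0], threshold=1.0))  # fires: 1.4 > 1.0
print(artificial_neuron([0.2, 0.3], [1.0, 1.0], threshold=1.0))  # silent: 0.5 < 1.0
```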

While artificial neural networks are structurally similar to their biological counterparts, artificial neural networks are distinct in several ways. For example, certain artificial neural networks send signals only at fixed time intervals, unlike biological neural networks, in which neuronal activity is variable.3 Another major difference between biological neural networks and artificial neural networks is the time of response. For biological neural networks, there is often a latent period before a response, whereas in artificial neural networks, responses are immediate.3

Neural networks are useful in a wide range of fields that involve large datasets, ranging from biological systems to economic analysis. These networks are practical in problems involving pattern recognition, such as predicting data trends.3 Neural networks are also effective when data is error-prone, as in cognitive software like speech and image recognition.3

Neural Network Architecture:

One popular neural network design is the multilayer perceptron (MLP). In the MLP design, each artificial neuron outputs a weighted sum of its inputs based on the strength of its synaptic connections.1 Synaptic strength is determined by the formulaic design of the neural network and is expressed as a weight: stronger, more valuable connections have a larger weight and are therefore more influential in the weighted sum. The output of the neuron depends on whether the weighted sum is greater than the threshold value of the artificial neuron.1 The MLP design was originally composed of perceptrons, artificial neurons that provide a binary output of zero or one. Perceptrons have limited use in a neural network model because small changes in the input can drastically alter the output value of the system. Most current MLP systems therefore use sigmoid neurons instead of perceptrons. Sigmoid neurons can take inputs and produce outputs of values between zero and one, allowing for more variation in the inputs because these changes do not radically alter the outcome of the model.4
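The contrast between a perceptron's hard threshold and a sigmoid neuron's smooth output can be sketched in a few lines (the input values are illustrative, not from the paper):

```python
import math

def perceptron(weighted_sum, threshold=0.0):
    # Binary output: a tiny change in the input can flip the result entirely.
    return 1 if weighted_sum > threshold else 0

def sigmoid(weighted_sum):
    # Smooth output in (0, 1): a tiny change in the input gives only a
    # tiny change in the output.
    return 1.0 / (1.0 + math.exp(-weighted_sum))

print(perceptron(0.01), perceptron(-0.01))  # the tiny shift flips 1 to 0
print(sigmoid(0.01), sigmoid(-0.01))        # both outputs stay near 0.5
```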

In terms of architecture, the MLP design is a feedforward neural network.1 In a feedforward design, the units are arranged so signals travel exclusively from input to output. These networks are composed of a layer of input neurons, a layer of output neurons, and a series of hidden layers in between. The hidden layers are composed of internal neurons that further process the data within the system. The complexity of this model varies with the number of hidden layers and the number of units in each layer.1
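A minimal sketch of such a feedforward pass, with one hidden layer and illustrative (untrained) weights:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each neuron in the layer takes a weighted sum of all inputs plus a bias.
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

def feedforward(inputs, network):
    # Signals travel strictly forward: input -> hidden layer(s) -> output.
    for weights, biases in network:
        inputs = layer(inputs, weights, biases)
    return inputs

# Toy network: 2 inputs -> 3 hidden neurons -> 1 output (weights illustrative).
net = [
    ([[0.2, -0.4], [0.7, 0.1], [-0.5, 0.3]], [0.0, 0.1, -0.1]),  # hidden layer
    ([[0.6, -0.3, 0.8]], [0.05]),                                # output layer
]
print(feedforward([1.0, 0.5], net))  # a single value between 0 and 1
```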

In an MLP design, once the number of layers and the number of units in each layer are determined, the threshold values and the synaptic weights in the system need to be set using training algorithms so that the errors in the system are minimized.4 These training algorithms use a known dataset (the training data) to modify the system until the differences between the expected output and the actual output values are minimized.4 Training algorithms allow for neural networks to be constructed with optimal weights, which lets the neural network make accurate predictions when presented with new data. One such training algorithm is the backpropagation algorithm. In this design, the algorithm analyzes the gradient vector and the error surface in the data until a minimum is found.1 The difficult part of the backpropagation algorithm is determining the step size. Larger steps can result in faster runtimes, but can overstep the solution; comparatively smaller steps can lead to a much slower runtime, but are more likely to find a correct solution.1
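The step-size trade-off can be illustrated with gradient descent on a toy one-dimensional error surface, E(w) = (w - 3)^2, whose minimum is at w = 3; the step sizes are illustrative values:

```python
# Gradient descent on E(w) = (w - 3)**2. Large steps move fast but can
# overshoot the minimum; small steps are reliable but slow; too-large
# steps overstep so badly that the search diverges.

def descend(step_size, w=0.0, iterations=50):
    for _ in range(iterations):
        gradient = 2 * (w - 3)   # dE/dw
        w -= step_size * gradient
    return w

print(descend(0.01))   # small steps: still short of 3 after 50 iterations
print(descend(0.4))    # moderate steps: converges very close to 3
print(descend(1.1))    # oversized steps: oscillates and diverges
```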

While feedforward neural network designs like MLP are common, there are many other neural network designs. These other structures include examples such as recurrent neural networks, which allow for connections between neurons in the same layer, and self-organizing maps, in which neurons attain weights that retain characteristics of the input. All of these network types also have variations within their specific frameworks.5 The Hopfield network and Boltzmann machine neural network architectures utilize the recurrent neural network design.5 While feedforward neural networks are the most common, each design is uniquely suited to solve specific problems.


One of the main problems with neural networks is that, for the most part, they have limited ability to identify causal relationships explicitly. Developers of neural networks feed these networks large swathes of data and allow the networks to determine independently which input variables are most important.10 However, it is difficult for the network to indicate to the developers which variables matter most in calculating the outputs. While some techniques exist to analyze the relative importance of each neuron in a neural network, these techniques still do not present as clear a causal relationship between variables as similar data analysis methods, such as logistic regression.10

Another problem with neural networks is the tendency to overfit. Overfitting of data occurs when a data analysis model such as a neural network generates good predictions for the training data but worse ones for testing data.10 Overfitting happens because the model accounts for irregularities and outliers in the training data that may not be present across actual data sets. Developers can mitigate overfitting in neural networks by penalizing large weights and limiting the number of neurons in hidden layers.10 Reducing the number of neurons in hidden layers reduces overfitting but also limits the ability of the neural network to model more complex, nonlinear relationships.10
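One common way to penalize large weights, as described above, is to add a term proportional to the squared weights to the training loss; the penalty coefficient below is an illustrative hyperparameter, not a value from the paper:

```python
# Sketch of a weight penalty (L2 regularization): the loss gains a term
# proportional to the squared size of the weights, so training prefers
# smaller weights and a smoother, less overfit model.

def penalized_loss(prediction_errors, weights, lam=0.01):
    data_loss = sum(e * e for e in prediction_errors) / len(prediction_errors)
    weight_penalty = lam * sum(w * w for w in weights)
    return data_loss + weight_penalty

errors = [0.1, -0.2, 0.05]
print(penalized_loss(errors, [0.3, -0.5]))   # small weights: small penalty
print(penalized_loss(errors, [3.0, -5.0]))   # large weights: heavily penalized
```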


Artificial neural networks allow for processing of large amounts of data, making them useful tools in many fields of research. For example, the field of bioinformatics relies heavily on neural network pattern recognition to predict various proteins’ secondary structures. One popular algorithm used for this purpose is Position Specific Iterated Basic Local Alignment Search Tool (PSI-BLAST) Secondary Structure Prediction (PSIPRED).6 This algorithm uses a two-stage structure that consists of two three-layered feedforward neural networks. The first stage of PSIPRED involves inputting a scoring matrix generated by using the PSI-BLAST algorithm on a peptide sequence. PSIPRED then takes 15 positions from the scoring matrix and uses them to output three values that represent the probabilities of forming the three protein secondary structures: helix, coil, and strand.6 These probabilities are then input into the second stage neural network along with the 15 positions from the scoring matrix, and the output of this second stage neural network includes three values representing more accurate probabilities of forming helix, coil, and strand secondary structures.6
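The two-stage windowing idea can be sketched schematically. This is not the real PSIPRED implementation: `stage` is a hypothetical stand-in for a trained three-output network, and the profile is reduced to a single toy column rather than a full PSI-BLAST scoring matrix:

```python
def sliding_windows(profile, width=15):
    # Pad the profile so every residue gets a full window centered on it.
    pad = width // 2
    padded = [0.0] * pad + profile + [0.0] * pad
    return [padded[i:i + width] for i in range(len(profile))]

def normalize(scores):
    total = sum(scores)
    return [s / total for s in scores]

def stage(window, extra=()):
    # Hypothetical stand-in for a trained network: returns three
    # pseudo-probabilities for (helix, coil, strand).
    features = list(window) + list(extra)
    raw = [abs(sum(features)) + 1, abs(features[0]) + 1, 1.0]
    return normalize(raw)

profile = [0.2, -1.0, 0.5, 1.3, -0.7, 0.4, 0.9, -0.2]  # toy 1-column profile
for window in sliding_windows(profile):
    first = stage(window)                 # stage 1: initial probabilities
    refined = stage(window, extra=first)  # stage 2: window plus stage-1 output
    assert abs(sum(refined) - 1.0) < 1e-9
print("every residue gets three probabilities summing to 1")
```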

Neural networks are used not only to predict protein structures, but also to analyze genes associated with the development and progression of cancer. More specifically, researchers and doctors use artificial neural networks to identify the type of cancer associated with certain tumors. Such identification is useful for correct diagnosis and treatment of each specific cancer.7 These artificial neural networks enable researchers to match genomic characteristics from large datasets to specific types of cancer and predict those types of cancer.7

In bioinformatic scenarios such as the above two examples, trained artificial neural networks quickly provide high-quality results for prediction tasks.6 These characteristics of neural networks are important for bioinformatics projects because bioinformatics generally involves large quantities of data that need to be interpreted both effectively and efficiently.6

The applications of artificial neural networks are also viable within fields outside the natural sciences, such as finance. These networks can be used to predict subtle trends such as variations in the stock market or when organizations will face bankruptcy.8,9 Neural networks can provide more accurate predictions more efficiently than other prediction models.9


Over the past decade, artificial neural networks have become more refined and are being used in a wide variety of fields. Artificial neural networks allow researchers to find patterns in the largest of datasets and utilize the patterns to predict potential outcomes. These artificial neural networks provide a new computational way to learn and understand diverse assortments of data and allow for a more accurate and effective grasp of the world.


  1. Taiwo Oladipupo Ayodele (2010). Types of Machine Learning Algorithms, New Advances in Machine Learning, Yagang Zhang (Ed.), InTech, DOI: 10.5772/9385.
  2. Neural Networks: An Introduction By Berndt Muller, Joachim Reinhardt
  3. Urbas, John V. Article
  4. Michael A. Nielsen, "Neural Networks and Deep Learning", Determination Press, 2015
  5. Elements of Artificial Neural Networks by Kishan Mehrotra, Chilukuri Mohan
  6. Neural Networks in Bioinformatics by Ke Chen, Lukasz A. Kurgan
  7. Artificial Neural Networks in the cancer genomics frontier by Andrew Oustimov, Vincent Vu
  8. An enhanced artificial neural network for stock price predictions by Jiaxin Ma
  9. A comparison of artificial neural network model and logistics regression in prediction of companies’ bankruptcy by Ali Mansouri
  10. Advantages and disadvantages of using artificial neural networks versus logistic regression for predicting medical outcomes by Jack V. Tu


Optimizing Impulse and Chamber Pressure in Hybrid Rockets




Hybrid rockets—rockets that use a liquid oxidizer and solid cylindrical fuel grains—are currently experiencing a resurgence in research rocketry due to their comparative safety benefit.1 The unique design of a hybrid rocket enables regulation of the fuel and oxidizer inputs, and thus modulation of the combustion chamber pressure.2 This reduces the risk of explosion.3 This paper gives a basic overview of the function of a hybrid rocket, the roles of injector plate geometry and rocket fuel in generating thrust, and the results of the Rice Eclipse research team's study of the effect of injector plate geometries and rocket fuel combinations on thrust and impulse. The purpose of this research is to discover a fuel grain and injector plate combination with the thrust necessary to launch a hybrid rocket into suborbital space.

Solid Rockets

Most entry-level, low-hazard rockets use solid motors.4 Solid rockets are generally considered the safest option because of their consistent burn profile.5 These rockets have a solid cylinder of fuel in the combustion chamber that contains a blend of rocket fuel and oxidizer.5 Over the course of flight, the fuel/oxidizer blend gradually depletes, like a high-powered candle, until the rocket reaches its apogee.5 Since the fuel and oxidizer are initially mixed together, it is highly unlikely for a solid rocket to develop the concentration of fuel necessary for instantaneous combustion, which would result in an explosion.5

Liquid Rockets

Typical rockets deployed in space are liquid rockets.6 These rockets contain tanks of liquid oxidizer and liquid fuel that are atomized in the combustion chamber to burn at the high efficiencies required to achieve the impulse necessary for escape velocity.7 In particular, atomization provides the high surface-area-to-volume ratio necessary for an efficient burn and gives the rocket its extremely high thrust. The disadvantage of liquid rockets is the substantial safety risk they pose.7 A liquid combustion system keeps the oxidizer and fuel dangerously close to mixing, which can create a concentration of oxidizer-fuel mixture susceptible to a spark and resultant explosion.

Hybrid Rockets

Hybrid rockets combine the best of both solid and liquid rockets.6 The liquid oxidizer of the hybrid rocket is atomized over the solid fuel to give a high-thrust yet controlled burn in the combustion chamber.2 Although the sophistication of hybrid rocket engineering prevents most novice rocket builders from constructing hybrids, Rice Eclipse has constructed the fifth amateur hybrid rocket in America—which we call the MK1.

Injector Plates

Injector plates are metallic structures that function like spray guns and divide the stream of oxidizer into thousands of small atomized parts.8 A variety of designs or geometries exist that serve to break up oxidizer flow; the designs we considered in this study are the showerhead and impinging designs.


Showerhead Plates

Showerhead injectors function similarly to household showerheads.4 A series of radially placed holes taper inward as they pass through the injector plate, confining the oxidizer to a very small space before releasing it as a spray into the combustion chamber.8 The fluid atomizes because the oxidizer accelerates as it travels through the constricted small holes but suddenly decelerates as it enters the combustion chamber, due to the rapid change in pressure.8 This process of breaking up liquid streams due to sudden resistance to flow is called the venturi effect.8

Impinging Plates

The second type of injector plate studied is the impinging injector plate.4 In this style of injector plate, the holes are angled to face one another.9 As the oxidizer flows through the holes of the plate, the streams impinge, or collide, at a central location.9 Upon collision, the streams atomize.4

It is hypothesized that this plate structure should result in much better performance because of greater atomization compared to a corresponding showerhead plate.4 For this project, the angle of the impinging holes was chosen to be 30 degrees from the normal in order to optimize impingement and atomization at the end of the pre-combustion chamber.9

Fuel Grains

Rocket fuels are often made of various materials that complement each other's chemical properties to produce a high-efficiency burn.10 These fuel components are held together in a cylindrical grain by a binder compound that is also consumed in combustion.11 Therefore, it is important for both the standard fuel components and the binder to burn efficiently.11 The efficiency of a burn is quantified by the fuel regression rate, the speed at which the fuel grain is depleted.12 While this rate varies with combustibility and other chemical properties, it also depends heavily on the surface area available for burning.12 Fuels with high surface area, like those in a liquid or gaseous state, can achieve high regression rates.12 Thus, hybrid and solid rocket enthusiasts have attempted to develop high-surface-area grains for efficient burns; this has previously been achieved using exotic grain configurations designed to maximize the exposure of the grain.12 Rice Eclipse has taken a different approach, using standard cylindrical fuel grains that incorporate high-regression-rate liquefying paraffin alongside conventional solid rocket fuel. These fuel grains were combusted with a nitrous oxide oxidizer.

Paraffin Fuel

Hydroxyl-terminated polybutadiene (HTPB) is the most commonly used rocket fuel for both hybrid and solid rocket motors.13 In solid rockets, the physical properties of HTPB make it an ideal chemical both to bind the oxidizer into a strong yet elastic fuel grain and to serve as a source of fuel.12 However, HTPB does not burn with the efficiencies required to accelerate rockets to orbital velocities.14 To improve pure HTPB grains, researchers have experimented with adding paraffin, a waxy compound that burns with a higher regression rate than HTPB, to the fuel grain.15 Under the high temperatures of the combustion chamber, solid paraffin wax forms a thin layer of low-surface-tension liquid on the face of the fuel grain cylinder exposed to the oxidizer.16 This liquid layer vaporizes under the high flow rate and pressure of the oxidizer, producing the large surface-area-to-volume ratio common to solid and liquid rockets.16 This liquefaction phenomenon allows paraffin to produce high-regression-rate fuels in both hybrid and solid motors.16 However, paraffin by itself cannot be molded into a fuel grain due to its low viscosity.16 Thus, the inclusion of HTPB enables the production of a moldable fuel grain that possesses the high regression rate of paraffin wax.17

Materials and Methods

These tests were conducted in Houston, Texas in the MK1 test motor. The maximum combustion chamber pressure of MK1 was set to 500 psi. The motor used a load cell for thrust measurements and an internal pressure sensor for the combustion chamber profile. Each test fire lasted for four seconds, and three fires were conducted per configuration to ensure reproducibility and consistency of data.

We tested two fuel grain compositions: 0% paraffin/100% HTPB and 50% paraffin/50% HTPB. All of these tests utilized a nitrous oxide oxidizer. Each grain type was cast in the Rice University Oshman Engineering Design Kitchen.

The injector plates were made of stock steel and were machined in the Rice University Oshman Engineering Design Kitchen. Two values drove the design of the injector plates: the desired oxidizer mass flow rate (0.126 kg/s) and the desired pressure drop across the injector plate (1.72 MPa).
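Given those two design values, the standard single-phase incompressible injector equation, m_dot = Cd * A * sqrt(2 * rho * dP), provides a rough estimate of the required total orifice area. The discharge coefficient, nitrous oxide density, and hole count below are assumed illustrative values, not figures from this paper:

```python
import math

# Estimate total injector orifice area from the stated design values
# using m_dot = Cd * A * sqrt(2 * rho * dP).

m_dot = 0.126        # kg/s, desired oxidizer mass flow (from the text)
delta_p = 1.72e6     # Pa, desired pressure drop across the plate (from the text)
cd = 0.6             # discharge coefficient, assumed for a plain orifice
rho = 780.0          # kg/m^3, liquid nitrous oxide density, assumed

total_area = m_dot / (cd * math.sqrt(2 * rho * delta_p))  # m^2

n_holes = 12  # hypothetical hole count, for illustration only
hole_diameter = 2 * math.sqrt(total_area / n_holes / math.pi)
print(f"total orifice area: {total_area * 1e6:.2f} mm^2")
print(f"diameter per hole ({n_holes} holes): {hole_diameter * 1e3:.2f} mm")
```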

Graphite nozzles with an entrance diameter of 1.52 in, a throat diameter of 0.295 in, and an exit diameter of 0.65 in were used. Each nozzle is 1.75 in long and has a converging half angle of 40 degrees and a diverging half angle of 12 degrees.
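From the stated diameters, the nozzle's area ratios follow directly; the expansion ratio (exit area over throat area) largely determines how much the exhaust expands and accelerates in the diverging section:

```python
import math

# Area ratios for the nozzle geometry given in the text.

def circle_area(d):
    return math.pi * d ** 2 / 4

throat, exit_d, entrance = 0.295, 0.65, 1.52  # inches, from the text
expansion_ratio = circle_area(exit_d) / circle_area(throat)
contraction_ratio = circle_area(entrance) / circle_area(throat)
print(f"expansion ratio (exit/throat): {expansion_ratio:.2f}")
print(f"contraction ratio (entrance/throat): {contraction_ratio:.1f}")
```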


Three different fuel and injector plate combinations were studied. We performed a base-case test of 0% paraffin/100% HTPB with a showerhead plate. We then studied the effect of adding an impinging plate to the 0% paraffin/100% HTPB grain, and finally tested a 50% paraffin/50% HTPB grain on the showerhead plate. These configurations let us see how a paraffin-blended fuel grain and an impinging plate each independently affected rocket performance. The three scatter plots below show the thrust from each of the grains during a test fire. Thrust is directly proportional to the specific impulse of the rocket.


50% Paraffin Test

The 50% paraffin grain showed a significant improvement compared to the 0% paraffin base case, increasing the average thrust by 58% from 380 lbf to about 600 lbf. The paraffin fuel grain also improved the consistency of the burn due to the even spread of the paraffin grains in the fuel. Although chamber pressure did increase from about 23 psi to 38 psi, this increase in pressure is well below the 50 psi operating capacity of the rocket and would not be a handicap for the fuel grain.

Impinging Plate

The third test fire, which demonstrated the impinging plate, maintained an average thrust of 700 lbf at maximum capacity—the highest average thrust. This is because the impinging injector plate increases the atomization of the oxidizer and the surface area available for combustion, intensifying the resulting burn. This increase in burn efficiency also reduces the overall burn time of the fuel and in this case shortened the fire to about two seconds from a four second burn in the base case.


The data show that the impinging injector was successful at achieving a higher-thrust burn. The paraffin fuels also demonstrated improved performance over the traditional HTPB fuel grains. This improvement likely results from the reduced energy barrier to vaporization in the paraffin fuels compared to HTPB. The combination of improved vaporization and atomization allowed the impinging injector plate tests to show significantly higher maximum thrust than all other tested configurations. Future testing can focus on combining the impinging plate with different concentrations of paraffin to take full advantage of increased atomization and surface area.


  1. Spurrier, Zachary (2016) "Throttleable GOX/ABS Launch Assist Hybrid Rocket Motor for Small Scale Air Launch Platform". All Graduate Theses and Dissertations, 1, 1-72.
  2. Alkuam, E. and Alobaidi, W. (2016) Experimental and Theoretical Research Review of Hybrid Rocket Motor Techniques and Applications. Advances in Aerospace Science and Technology, 1, 71-82.
  3. Forsyth, Jacob Ward, (2016) "Enhancement of Volumetric Specific Impulse in HTPB/Ammonium Nitrate Mixed Hybrid Rocket Systems". All Graduate Plan B and other Reports, 876, 1-36.
  4. European Space Agency, (2017) "Solid and Liquid Fuel Rockets".
  5. Whitmore S.A., Walker S.D., Merkley D.P., Sobbi M,  (2015) “High regression rate hybrid rocket fuel grains with helical port structures”, Journal of Propulsion and Power, 31, 1727-1738.
  6. Thomas J. Rudman, (2002) “The Centaur Upper Stage Vehicle”, International Conference on Launcher Technology-Space Launcher Liquid Propulsion, 4, 1-22.
  7. D. K. Barrington and W. H. Miller, (1970) "A review of contemporary solid rocket motor performance prediction techniques", Journal of Spacecraft and Rockets, 7, 225-237.
  8. Isakowitz, Steven J  International Reference Guide to Space Launch Systems; American Institute of Aeronautics and Astronautics: Washington D.C., 1999;
  9. Benjamin Waxman, Brian Cantwell, and Greg Zilliac, (2012) "Effects of Injector Design and Impingement Techniques on the Atomization of Self-Pressurizing Oxidizers", 48th AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit, Joint Propulsion Conferences, 6, 1-12
  10. Silant'yev, A.I. Solid Rocket Propellants Defense Technical Information Center [Online], August 22, 1967, (accessed Feb. 9, 2017).
  11. F. M. Favaró, W. A. Sirignano, M. Manzoni, and L. T. DeLuca.  (2013) "Solid-Fuel Regression Rate Modeling for Hybrid Rockets", Journal of Propulsion and Power, 29, 205-215
  12. Lengellé, G., Duterque, J., and Trubert, J.F. (2002) “Combustion of Solid Propellants” North atlantic Treaty Organization Research and Technology Organization Educational Notes, 23, 27-31.
  13. Dario Pastrone, (2012) “Approaches to Low Fuel Regression Rate in Hybrid Rocket Engines” International Journal of Aerospace Engineering, 2012,1-12.
  14. Boronowsky, Kenny Michael, (2011) "Non-homogeneous Hybrid Rocket Fuel for Enhanced Regression Rates Utilizing Partial Entrainment" San Jose State University Master's Theses. Paper 4039, 3-110.
  15. McCulley, Jonathan M, (2013) "Design and Testing of Digitally Manufactured Paraffin Acrylonitrile-Butadiene-Styrene Hybrid Rocket Motors"All Graduate Theses and Dissertations from Utah State University, 1450, 1-89.
  16. T Chai “Rheokinetic analysis on the curing process of HTPB-DOA- MDI binder system”  Institute of Physics: Materials Science and Engineering, 147, 1-8.
  17. Sutton, G. Rocket propulsion elements; New York: John Wiley & Sons, 2001



The Creation of Successful Scaffolds for Tissue Engineering




Tissue engineering is a broad field with applications ranging from pharmaceutical testing to total organ replacement. Recently, there has been extensive research on creating tissue that is able to replace or repair natural human tissue. Much of this research focuses on the creation of scaffolds that can both support cell growth and successfully integrate with the surrounding tissue. This article will introduce the concept of a scaffold for tissue engineering; discuss key areas of research including biomolecule use, vascularization, mechanical strength, and tissue attachment; and introduce some important recent advancements in these areas.


Tissue engineering relies on four main factors: the growth of appropriate cells, the introduction of the proper biomolecules to these cells, the attachment of the cells to an appropriate scaffold, and the application of specific mechanical and biological forces to develop the completed tissue.1

Successful cell culture has been possible since the 1960s, but these early methods lacked the adaptability necessary to make functioning tissues. With the introduction of induced pluripotent stem cells in 2008, however, researchers have no longer faced the resource limitations previously encountered. As a result, the growth of cells of a desired type has not been limiting to researchers in tissue engineering and thus warrants less concern than other factors in contemporary tissue engineering.2,3

Similarly, the introduction of essential biomolecules (such as growth factors) to the developing tissue has generally not restricted modern tissue engineering efforts. Extensive research and knowledge of biomolecule function as well as relatively reliable methods of obtaining important biomolecules have allowed researchers to make engineered tissues more successfully emulate functional human tissue using biomolecules.4,5 Despite these advancements in information and procurement methods, however, the ability of biomolecules to improve engineered tissue often relies on the structure and chemical composition of the scaffold material.6

Cellular attachment has also been a heavily explored field of research. This refers specifically to the ability of the engineered tissue to seamlessly integrate into the surrounding tissue. Studies in cellular attachment often focus on qualities of scaffolds such as porosity as well as the introduction of biomolecules to encourage tissue union on the cellular level. Like biomolecule effectiveness, successful cellular attachment depends on the material and structure of the tissue scaffolding.7

Also critical to developing functional tissue is exposing it to the right environment. This development of tissue properties via the application of mechanical and biological forces depends strongly on finding materials that can withstand the required forces while supplying cells with the necessary environment and nutrients. Previous research in this area has focused on several scaffold materials. However, improvements to the materials and to the specific methods of development are still greatly needed in order to create functional implantable tissue. Because of the difficulty of conducting research in this area, devoted efforts to improving these methods remain critical to successful tissue engineering.

In order for a scaffold to be capable of supporting cells until the formation of a functioning tissue, it is necessary to satisfy several key requirements, principally introduction of helpful biomolecules, vascularization, mechanical function, appropriate chemical and physical environment, and compatibility with surrounding biological tissue.8,9 Great progress has been made towards satisfying many of these conditions, but further research in the field of tissue engineering must address challenges with existing scaffolds and improve their utility for replacing or repairing human tissue.

Key Research Areas of Scaffolding Design


Throughout most early tissue engineering projects, researchers focused on simple cell culture surrounding specific material scaffolds.10 Promising developments such as the creation of engineered cartilage motivated further funding and interest in research. However, these early efforts missed out on several crucial factors to tissue engineering that allow implantable tissue to take on more complex functional roles. In order to create tissue that is functional and able to direct biological processes alongside nearby natural tissue, it is important to understand the interactions of biomolecules with engineered tissue.

Because the ultimate goal of tissue engineering is to create functional, implantable tissue that mimics biological systems, most important biomolecules have been explored by researchers in the medical field outside of tissue engineering. As a result, a solid body of research exists describing the functions and interactions of various biomolecules. Because of this existing information, understanding their potential uses in tissue engineering relies mainly on studying the interactions of biomolecules with materials which are not native to the body; most commonly, these non-biological materials are used as scaffolding. To complicate the topic further, biomolecules are a considerably large category encompassing everything from DNA to glucose to proteins. As such, it is most necessary to focus on those that interact closely with engineered tissue.

One type of biomolecule that is subject to much research and speculation in current tissue engineering is the growth factor.11 Specific growth factors can have a variety of functions from general cell proliferation to the formation of blood cells and vessels.12-14 They can also be responsible for disease, especially the unchecked cell generation of cancer.15 Many of the positive roles have direct applications to tissue engineering. For example, Transforming Growth Factor-beta (TGF-β) regulates normal growth and development in humans.16 One study found that while addition of ligands to engineered tissue could increase cellular adhesion to nearby cells, the addition also decreased the generation of the extracellular matrix, a key structure in functional tissue.17 To remedy this, the researchers then tested the same method with the addition of TGF-β. They saw a significant increase in the generation of the extracellular matrix, improving their engineered tissue’s ability to become functional faster and more effectively. Clearly, a combination of growth factors and other tissue engineering methods can lead to better outcomes for functional tissue engineering.

With the utility of growth factors established, delivery methods become very important. Several methods have been shown to be effective, including delivery in a gelatin carrier.18 However, some of the most promising procedures rely on the scaffolding's properties. One set of studies mimicked the natural release of growth factors through the extracellular matrix by creating a nanofiber scaffold containing growth factors for delayed release.19 The study saw a positive influence on the behavior of cells as a result of the release of growth factor. Other methods vary physical properties of the scaffold, such as pore size, to trigger immune pathways that release regenerative growth factors, as will be discussed later. The use of biomolecules, and specifically growth factors, is heavily linked to the choice of scaffolding material and can be critical to the success of an engineered tissue.


Because almost no tissue can survive without proper oxygenation, engineered tissue vascularization has been a focus of many researchers in recent years seeking to optimize the chances of engineered tissue success.20 For many areas of advancement, this process depends on the scaffold.21 The actual requirements for the level and complexity of vasculature vary greatly with the type of tissue; the requirements for blood flow in the highly vascularized lungs are different from those of cortical bone.22,23 It is therefore more appropriate here to address the methods that have been developed for creating vascularized tissue rather than the designs of specific tissues.

One method that has shown great promise is the use of modified 3D printers to cast vascularized tissue.24 This method uses relatively new printing technology to create a carbohydrate glass lattice in the shape of the desired vascular network. The lattice is then coated with a hydrogel scaffold to allow cells to grow, and the carbohydrate glass is dissolved from inside the hydrogel, leaving open vasculature in a specific shape. This method has succeeded in achieving cell growth in areas of engineered tissue that would normally undergo necrosis. Even more remarkably, the created vasculature showed the ability to branch into a more complex system when coated with endothelial cells.24

However, this method is not always applicable. Many tissue types require scaffolds that are more rigid than hydrogels or that have different properties. In these cases, researchers have focused on the effect of a material’s porosity on angiogenesis.7,25 Several key factors have been identified for blood vessel growth, including pore size, surface area, and endothelial cell seeding similar to that which succeeded in 3D-printed hydrogels. Many other methods are currently being researched based on a variety of scaffolds. Improvements on these methods, combined with better research into the interactions of vascularization with biomaterial attachment, show great promise for engineering complex, differentiated tissue.

Mechanical Strength

Research has consistently demonstrated that large-scale cell culture is no longer a limiting factor in bioengineering. With the introduction of technologies like bioreactors and three-dimensional cell culture plates, growing cells of the desired type and in the appropriate form continues to become easier, which in turn allows researchers to focus on factors beyond simply gathering the proper types of cells.2 This matters because most applications in tissue engineering require more than the ability to create groupings of cells: the cells must have a certain degree of mechanical strength in order to functionally replace tissue that experiences physical pressure.

The mechanical strength of a tissue is the result of many developmental factors and can be classified in different ways, often based on the type of force applied or the amount of force the tissue can withstand. Regardless, the mechanical strength of an engineered tissue depends primarily on the physical strength of the scaffold and on the ability of its cells to function under applied pressure, both of which are products of the scaffold’s material and fabrication method. For example, scaffolds in bone tissue engineering are often measured for compressive strength, and studies have found that certain fabrication techniques, such as heat treatment in a vacuum oven, may increase it.26 One group matched the higher end of the strength of cancellous (spongy) bone via 3D printing by using specific dopant molecules within the binding layers.27 This simple change produced scaffolding with ten times the mechanical strength of scaffolding made from traditional materials, a value within the range of natural bone. The same binding agents between scaffold layers also increased cellular attachment, the implications of which will be discussed later.27 Such tissue is better able to meet functional requirements and therefore to serve as a replacement for bone. Thus, simple changes in materials and methods can drastically increase the mechanical usability of scaffolds, often with positive effects on other qualities important for certain tissue types.

Clearly, not all designed tissues require the mechanical strength of bone; for contrast, the brain experiences pressures of less than one kPa, while bone experiences pressures on the order of 10^6 kPa.28 Not all scaffolds must support the same loads, and they must be designed to accommodate these structural differences. Additionally, other tissues experience forces such as tension or torsion depending on their locations in the body. Mechanical properties must therefore be considered on a tissue-by-tissue basis when determining the corresponding scaffold structure. That said, mechanical limitations are a primary concern mainly in bone, cartilage, and cardiovascular engineered tissue, the last of which has significantly more complicated mechanical requirements.29
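The tissue-by-tissue reasoning above can be made concrete with a simple screening check. The stiffness ranges below are rough, assumed orders of magnitude for illustration only; the function and values are hypothetical, not drawn from the cited studies.

```python
# Rough, illustrative stiffness ranges (kPa) for target tissues: orders of
# magnitude only, NOT values from the studies cited in the text.
TISSUE_STIFFNESS_KPA = {
    "brain": (0.1, 1.0),            # very soft neural tissue
    "cartilage": (1e2, 1e3),
    "cancellous_bone": (1e4, 1e6),  # spongy bone spans a wide natural range
}

def scaffold_suits(tissue: str, scaffold_modulus_kpa: float) -> bool:
    """True if the scaffold stiffness falls inside the target tissue's range."""
    low, high = TISSUE_STIFFNESS_KPA[tissue]
    return low <= scaffold_modulus_kpa <= high

print(scaffold_suits("brain", 0.5))            # a soft hydrogel-like scaffold
print(scaffold_suits("cancellous_bone", 0.5))  # far too soft for bone
```

A real screen would also account for the type of force (compression, tension, torsion) relevant to the implant site, as the paragraph above notes.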

Research in the past few years has investigated increasingly complex aspects of scaffold design and their effects on macroscopic physical properties. For example, it is generally accepted that pore size and the related surface area within engineered bone replacements are key to cellular attachment. Recent advances in scaffold fabrication have allowed researchers to investigate very specific properties of these pores, such as their individual geometry. One recent study found that using an inverse opal geometry, an architecture known in materials engineering for its high strength, for the pores doubled mineralization within a bone engineering scaffold.30 Mineralization is a crucial quality of bone because of its contribution to compressive strength.31 This result is important because it demonstrates researchers’ recent ability to alter scaffolds on a microscopic level in order to effect macroscopic changes in tissue properties.

Attachment to Nearby Tissue

Even with an ideal design, a tissue’s success as an implant relies on its ability to integrate with the surrounding tissue. For some types of tissue, this is simply a matter of avoiding rejection by the host through an immune response.32 In these cases, it is important to choose materials with specific consideration for reducing this immune response. Over the past several decades, it has been shown that the key requirement for biocompatibility is the use of materials that are nearly biologically inert and thus do not trigger a negative response from natural tissue.33 This strategy focuses on minimizing the immune response of the tissue surrounding the implant in order to avoid complications such as inflammation that might harm the patient. It has been relatively effective for implants ranging from total joint replacements to heart valves.

Avoiding a negative immune response has proven successful in some medical fields. However, more complex solutions involving a guided immune response may be necessary for engineered tissue implants to survive and take on their intended function. The challenge of balancing biochemical inertness with tissue survival has led researchers to investigate using the host immune response to the implant’s advantage.34 This method of intentionally triggering the surrounding natural tissue relies on the understanding that the immune response is actually essential to tissue repair. While an inert biomaterial may avoid a negative reaction, it also discourages a positive one. Without provoking some response to the new tissue, an implant remains foreign to the bordering tissue, meaning its cells cannot take on important functions; this limits the success of any biomaterial intended for more than a mechanical use.

Current studies have focused primarily on modifying surface topography and chemistry to elicit a positive immune reaction in the cells surrounding the new tissue. One example is the grafting of oligopeptides onto the surface of an implant to stimulate a macrophage response, which ultimately leads to the release of growth factors and greater cellular attachment through the chemical signals of the natural immune response.35 Another study found that a certain pore size in the scaffold material led to faster and more complete healing in an in vivo rabbit study. Upon further investigation, the smaller pore size was found to interact with macrophages involved in the triggered immune response, leading more of them down a regenerative pathway and thus to better and faster integration of the implant with the surrounding tissue.36 Similar studies have investigated methods such as attaching surface proteins, with similarly enlightening results. These promising studies have raised awareness of chemical signaling as a method to enhance biomaterial integration, with larger implications including faster healing times and greater functionality.


The use of scaffolds for tissue engineering has been the subject of much research because of its potential for extensive utilization in the medical field. Recent advancements have focused on several areas, particularly the use of biomolecules, improved vascularization, increases in mechanical strength, and attachment to existing tissue. Advancements in each of these fields have been closely related to the use of scaffolding. Several biomolecules, especially growth factors, have led to a greater ability for tissue to adapt as an integrated part of the body after implantation. These growth factors rely on efficient means of delivery, notably through inclusion in the scaffold, in order to have an effect on the tissue. The development of new methods and refinement of existing ones has allowed researchers to successfully vascularize tissue on multiple types of scaffolds. Likewise, better methods of strengthening engineered tissue scaffolds before cell growth and implantation have allowed for improved functionality, especially under mechanical forces. Modifications to scaffolding and the addition of special molecules have allowed for increased cellular attachment, improving the efficacy of engineered tissue for implantation. Further advancement in each of these areas could lead to more effective scaffolds and the ability to successfully use engineered tissue for functional implants in medical treatments.


  1. “Tissue Engineering and Regenerative Medicine.” National Institute of Biomedical Imaging and Bioengineering. N.p., 22 July 2013. Web. 29 Oct. 2016.
  2. Haycock, John W. “3D Cell Culture: A Review of Current Approaches and Techniques.” 3D Cell Culture: Methods and Protocols. Ed. John W. Haycock. Totowa, NJ: Humana Press, 2011. 1–15. Web.
  3. Takahashi, Kazutoshi, and Shinya Yamanaka. “Induction of Pluripotent Stem Cells from Mouse Embryonic and Adult Fibroblast Cultures by Defined Factors.” Cell 126.4 (2006): 663–676. ScienceDirect. Web.
  4. Richardson, Thomas P. et al. “Polymeric System for Dual Growth Factor Delivery.” Nat Biotech 19.11 (2001): 1029–1034. Web.
  5. Liao, IC, SY Chew, and KW Leong. “Aligned Core–shell Nanofibers Delivering Bioactive Proteins.” Nanomedicine 1.4 (2006): 465–471. Print.
  6. Elliott Donaghue, Irja et al. “Cell and Biomolecule Delivery for Tissue Repair and Regeneration in the Central Nervous System.” Journal of Controlled Release: Official Journal of the Controlled Release Society 190 (2014): 219–227. PubMed. Web.
  7. Murphy, Ciara M., Matthew G. Haugh, and Fergal J. O’Brien. “The Effect of Mean Pore Size on Cell Attachment, Proliferation and Migration in Collagen–glycosaminoglycan Scaffolds for Bone Tissue Engineering.” Biomaterials 31.3 (2010): 461–466. Web.
  8. Sachlos, E., and J. T. Czernuszka. “Making Tissue Engineering Scaffolds Work. Review: The Application of Solid Freeform Fabrication Technology to the Production of Tissue Engineering Scaffolds.” European Cells & Materials 5 (2003): 29–39; discussion 39–40. Print.
  9. Chen, Guoping, Takashi Ushida, and Tetsuya Tateishi. “Scaffold Design for Tissue Engineering.” Macromolecular Bioscience 2.2 (2002): 67–77. Wiley Online Library. Web.
  10. Vacanti, Charles A. 2006. “The history of tissue engineering.” Journal of Cellular and Molecular Medicine 10 (3): 569-576.
  11. Depprich, Rita A. “Biomolecule Use in Tissue Engineering.” Fundamentals of Tissue Engineering and Regenerative Medicine. Ed. Ulrich Meyer et al. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. 121–135. Web.
  12. Laiho, Marikki, and Jorma Keski-Oja. “Growth Factors in the Regulation of Pericellular Proteolysis: A Review.” Cancer Research 49.10 (1989): 2533. Print.
  13. Morstyn, George, and Antony W. Burgess. “Hemopoietic Growth Factors: A Review.” Cancer Research 48.20 (1988): 5624. Print.
  14. Yancopoulos, George D. et al. “Vascular-Specific Growth Factors and Blood Vessel Formation.” Nature 407.6801 (2000): 242–248. Web.
  15. Aaronson, SA. “Growth Factors and Cancer.” Science 254.5035 (1991): 1146. Web.
  16. Lawrence, DA. “Transforming Growth Factor-Beta: A General Review.” European cytokine network 7.3 (1996): 363–374. Print.
  17. Mann, Brenda K, Rachael H Schmedlen, and Jennifer L West. “Tethered-TGF-β Increases Extracellular Matrix Production of Vascular Smooth Muscle Cells.” Biomaterials 22.5 (2001): 439–444. Web.
  18. Malafaya, Patrícia B., Gabriela A. Silva, and Rui L. Reis. “Natural–origin Polymers as Carriers and Scaffolds for Biomolecules and Cell Delivery in Tissue Engineering Applications.” Matrices and Scaffolds for Drug Delivery in Tissue Engineering 59.4–5 (2007): 207–233. Web.
  19. Sahoo, Sambit et al. “Growth Factor Delivery through Electrospun Nanofibers in Scaffolds for Tissue Engineering Applications.” Journal of Biomedical Materials Research Part A 93A.4 (2010): 1539–1550. Web.
  20. Novosel, Esther C., Claudia Kleinhans, and Petra J. Kluger. “Vascularization Is the Key Challenge in Tissue Engineering.” From Tissue Engineering to Regenerative Medicine- The Potential and the Pitfalls 63.4–5 (2011): 300–311. Web.
  21. Drury, Jeanie L., and David J. Mooney. “Hydrogels for Tissue Engineering: Scaffold Design Variables and Applications.” Synthesis of Biomimetic Polymers 24.24 (2003): 4337–4351. Web.
  22. Lafage-Proust, Marie-Helene et al. “Assessment of Bone Vascularization and Its Role in Bone Remodeling.” BoneKEy Rep 4 (2015): n. pag. Web.
  23. Türkvatan, Aysel et al. “Multidetector CT Angiography of Renal Vasculature: Normal Anatomy and Variants.” European Radiology 19.1 (2009): 236–244. Web.
  24. Miller, Jordan S. et al. “Rapid Casting of Patterned Vascular Networks for Perfusable Engineered Three-Dimensional Tissues.” Nature Materials 11.9 (2012): 768–774. Web.
  25. Lovett, Michael et al. “Vascularization Strategies for Tissue Engineering.” Tissue Engineering. Part B, Reviews 15.3 (2009): 353–370. Web.
  26. Cox, Sophie C. et al. “3D Printing of Porous Hydroxyapatite Scaffolds Intended for Use in Bone Tissue Engineering Applications.” Materials Science and Engineering: C 47 (2015): 237–247. ScienceDirect. Web.
  27. Fielding, Gary A., Amit Bandyopadhyay, and Susmita Bose. “Effects of Silica and Zinc Oxide Doping on Mechanical and Biological Properties of 3D Printed Tricalcium Phosphate Tissue Engineering Scaffolds.” Dental Materials 28.2 (2012): 113–122. ScienceDirect. Web.
  28. Engler, Adam J. et al. “Matrix Elasticity Directs Stem Cell Lineage Specification.” Cell 126.4 (2006): 677–689. ScienceDirect. Web.
  29. Bilodeau, Katia, and Diego Mantovani. “Bioreactors for Tissue Engineering: Focus on Mechanical Constraints. A Comparative Review.” Tissue Engineering 12.8 (2006): 2367–2383. (Atypon). Web.
  30. Sommer, Marianne R. et al. “Silk Fibroin Scaffolds with Inverse Opal Structure for Bone Tissue Engineering.” Journal of Biomedical Materials Research Part B: Applied Biomaterials (2016). Wiley Online Library. Web.
  31. Sapir-Koren, Rony, and Gregory Livshits. “Bone Mineralization and Regulation of Phosphate Homeostasis.” IBMS BoneKEy 8.6 (2011): 286–300. Web.
  32. Boehler, Ryan M., John G. Graham, and Lonnie D. Shea. “Tissue Engineering Tools for Modulation of the Immune Response.” BioTechniques 51.4 (2011): 239–passim. PubMed Central. Web.
  33. Follet, H. et al. “The Degree of Mineralization Is a Determinant of Bone Strength: A Study on Human Calcanei.” Bone 34.5 (2004): 783–789. PubMed. Web.
  34. Franz, Sandra et al. “Immune Responses to Implants – A Review of the Implications for the Design of Immunomodulatory Biomaterials.” Biomaterials 32.28 (2011): 6692–6709. ScienceDirect. Web.
  35. Kao, Weiyuan John, and Damian Lee. “In Vivo Modulation of Host Response and Macrophage Behavior by Polymer Networks Grafted with Fibronectin-Derived Biomimetic Oligopeptides: The Role of RGD and PHSRN Domains.” Biomaterials 22.21 (2001): 2901–2909. ScienceDirect. Web.
  36. Bryers, James D, Cecilia M Giachelli, and Buddy D Ratner. “Engineering Biomaterials to Integrate and Heal: The Biocompatibility Paradigm Shifts.” Biotechnology and bioengineering 109.8 (2012): 1898–1911. Web.


Molecular Mechanisms Behind Alzheimer’s Disease and Epilepsy




Seizures are characterized by periods of high neuronal activity and are caused by alterations in synaptic function that disrupt the equilibrium between excitation and inhibition in neurons. While often associated with epilepsy, seizures can also occur after brain injuries and, interestingly, are common in Alzheimer’s patients. Although Alzheimer’s patients rarely show the common physical signs of seizures, recent research has shown that electroencephalogram (EEG) technology can detect nonconvulsive seizures in these patients. Furthermore, patients with Alzheimer’s have a 6- to 10-fold higher probability of developing seizures during the course of their disease compared to healthy controls.2 While previous research has focused on the underlying molecular mechanisms of Aβ plaques in the brain, the research presented here relates seizures to the cognitive decline in Alzheimer’s patients in an attempt to find therapeutic approaches that tackle both epilepsy and Alzheimer’s.


The hippocampus is found in the temporal lobe and is involved in the creation and consolidation of new memories. It is the first part of the brain to undergo neurodegeneration in Alzheimer’s disease, and as such, the disease is characterized by memory loss. Alzheimer’s differs from other types of dementia in that patients’ episodic memories are affected strongly and quickly. Likewise, patients who suffer from epilepsy also exhibit neurodegeneration in their hippocampi and have impaired episodic memories. Such similarities led researchers to hypothesize that the two diseases share pathophysiological mechanisms. In one study, four epileptic patients exhibited progressive memory loss that clinically resembled Alzheimer’s disease.6 In another, researchers found that seizures precede cognitive symptoms in late-onset Alzheimer’s disease.7 These findings led researchers to hypothesize that a high incidence of seizures increases the rate of cognitive decline in Alzheimer’s patients. However, much remains to be discovered about the molecular mechanisms underlying seizure activity and cognitive impairment.

Amyloid precursor protein (APP) is the precursor molecule to Aβ, the polypeptide that makes up the Aβ plaques found in the brains of Alzheimer’s patients. In many Alzheimer’s labs, the J20 APP mouse model of disease is used to simulate human Alzheimer’s. These mice overexpress the human form of APP, develop amyloid plaques, and have severe deficits in learning and memory. The mice also have high levels of epileptiform activity and exhibit spontaneous seizures that are characteristic of epilepsy.11 Understanding the long-lasting effects of these seizures is important in designing therapies for a disease that is affected by recurrent seizures. Thus, comparing the APP mouse model of disease with the temporal lobe epilepsy (TLE) mouse model is essential in unraveling the mysteries of seizures and cognitive decline.

Shared Pathology of the Two Diseases

The molecular mechanisms behind the two diseases are still unknown and under much research. An early observation in both TLE and Alzheimer’s involved a decrease of calbindin-D28K, a calcium-buffering protein, in the hippocampus.10 Neuronal calcium buffering and calcium homeostasis are well known to be involved in learning and memory. Calcium channels are involved in synaptic transmission, and a high calcium ion influx often results in altered neuronal excitability and calcium signaling. Calbindin buffers free Ca2+ and is thus critical to calcium homeostasis.

Some APP mice have severe seizures and an extreme loss of calbindin, while other APP mice exhibit no calbindin loss. The reasons for this are unclear, but like human patients, the mice are highly variable.

The loss of calbindin in both Alzheimer’s and TLE is highly correlated with cognitive deficits. However, the molecular mechanism behind the calbindin loss is unclear. Many researchers are now working to uncover this mechanism in the hopes of preventing the calbindin loss, thereby improving therapeutic avenues for Alzheimer’s and epilepsy patients.

Seizures and Neurogenesis

The dentate gyrus is one of the two areas of the adult brain that exhibit neurogenesis.13 Understanding neurogenesis in the hippocampus can lead to promising therapeutic targets in the form of neuronal replacement therapy. Preliminary research in Alzheimer’s and TLE has shown changes in neurogenesis over the course of the disease.14 However, whether neurogenesis is increased or decreased remains a controversial topic, as studies frequently contradict each other.

Many researchers study neurogenesis in the context of different diseases. In memory research, neurogenesis is thought to be involved in both memory formation and memory consolidation.12 Alzheimer’s leads to a gradual decrease in the generation of neural progenitors, the stem cells that can differentiate into a variety of neuronal and glial cell types.8 Further studies have shown that the neural stem cell pool undergoes accelerated depletion due to seizure activity.15 Initially, heightened neuronal activity stimulates neural progenitors to divide much faster than in controls. This rapid division depletes the limited stem cell pool prematurely; interestingly, this enhanced neurogenesis is detected long before other AD-linked pathologies. By the time the APP mice are older, the stem cell pool is depleted to the point where neurogenesis occurs much more slowly than in controls.9 This is thought to underlie memory deficits, in that the APP mice can no longer consolidate new memories as effectively. The same phenomenon occurs in mice with TLE.

The discovery of this premature neurogenesis in Alzheimer’s disease has many therapeutic benefits. For one, enhanced neurogenesis can be used as a marker for Alzheimer’s long before any symptoms are present. Furthermore, targeting increased neurogenesis holds potential as a therapeutic avenue, leading to better remedies for preventing the pathological effects of recurrent seizures in Alzheimer’s disease.


Research linking epilepsy with other neurodegenerative disorders is still in its infancy, and many researchers remain skeptical about the potential to create a single therapy for multiple conditions. Previous EEG studies recorded Alzheimer’s patients for a few hours at a time and found limited epileptiform activity; enhanced overnight monitoring has since shown that about half of Alzheimer’s patients have epileptiform activity in a 24-hour period, with most activity occurring during sleep.1 Recording patients for even longer periods will likely raise this percentage. Further research is being conducted to show the importance of seizures in enhancing cognitive deficits and in understanding Alzheimer’s disease, and it could lead to remarkable therapeutic advances in the future.


  1. Vossel, K. A. et al. Incidence and Impact of Subclinical Epileptiform Activity in Alzheimer’s Disease. Ann Neurol. 2016.
  2. Pandis, D. Scarmeas, N. Seizures in Alzheimer Disease: Clinical and Epidemiological Data. Epilepsy Curr. 2012. 12(5), 184-187.
  3. Chin, J. Scharfman, H. Shared cognitive and behavioral impairments in epilepsy and Alzheimer’s disease and potential underlying mechanisms. Epilepsy & Behavior. 2013. 26, 343-351.
  4. Carter, D. S. et. al. Long-term decrease in calbindin-D28K expression in the hippocampus of epileptic rats following pilocarpine-induced status epilepticus. Epilepsy Res. 2008. 79(2-3), 213-223.
  5. Jin, K. et. al. Increased hippocampal neurogenesis in Alzheimer’s Disease. Proc Natl Acad Sci. 2004. 101(1), 343-347.
  6. Ito, M., Echizenya, N., Nemoto, D., & Kase, M. (2009). A case series of epilepsy-derived memory impairment resembling Alzheimer disease. Alzheimer Disease and Associated Disorders, 23(4), 406–409.
  7. Picco, A., Archetti, S., Ferrara, M., Arnaldi, D., Piccini, A., Serrati, C., … Nobili, F. (2011). Seizures can precede cognitive symptoms in late-onset Alzheimer’s disease. Journal of Alzheimer’s Disease: JAD, 27(4), 737–742.
  8. Zeng, Q., Zheng, M., Zhang, T., & He, G. (2016). Hippocampal neurogenesis in the APP/PS1/nestin-GFP triple transgenic mouse model of Alzheimer’s disease. Neuroscience, 314, 64–74.
  9. Lopez-Toledano, M. A., Ali Faghihi, M., Patel, N. S., & Wahlestedt, C. (2010). Adult neurogenesis: a potential tool for early diagnosis in Alzheimer’s disease? Journal of Alzheimer’s Disease: JAD, 20(2), 395–408.
  10. Palop, J. J., Jones, B., Kekonius, L., Chin, J., Yu, G.-Q., Raber, J., … Mucke, L. (2003). Neuronal depletion of calcium-dependent proteins in the dentate gyrus is tightly linked to Alzheimer’s disease-related cognitive deficits. Proceedings of the National Academy of Sciences of the United States of America, 100(16), 9572–9577.
  11. Research Models: J20. AlzForum: Networking for a Cure.
  12. Kitamura, T. Inokuchi, K. (2014). Role of adult neurogenesis in hippocampal-cortical memory consolidation. Molecular Brain 7:13. 10.1186/1756-6606-7-13.
  13. Piatti, V. Ewell, L. Leutgeb, J. Neurogenesis in the dentate gyrus: carrying the message or dictating the tone. Frontiers in Neuroscience 7:50. doi: 10.3389/fnins.2013.00050
  14. Noebels, J. (2011). A Perfect Storm: Converging Paths of Epilepsy and Alzheimer’s Dementia Intersect in the Hippocampal Formation. Epilepsia 52, 39-46. doi:  10.1111/j.1528-1167.2010.02909.x
  15. Jasper, H.; In Jasper’s Basic Mechanisms of the Epilepsies, 4; Rogawski, M., et al., Eds.; Oxford University Press: USA, 2012


Detection of Gut Inflammation and Tumors Using Photoacoustic Imaging




Photoacoustic imaging is a technique in which contrast agents absorb photon energy and emit signals that can be analyzed by ultrasound transducers. This method allows for unprecedented depth imaging that can provide a non-invasive alternative to current diagnostic tools used to detect internal tissue inflammation.1 The Rice iGEM team strove to use photoacoustic technology and biomarkers to develop a noninvasive method of locally detecting gut inflammation and colon cancer. As a first step, we genetically engineered Escherichia coli to express near-infrared fluorescent proteins iRFP670 and iRFP713 and conducted tests using biomarkers to determine whether expression was confined to a singular local area.


In photoacoustic imaging, laser pulses of a specific, predetermined wavelength (the excitation wavelength) thermally excite a contrast agent such as a pigment or protein. The heating makes the contrast agent expand and contract, producing an ultrasonic emission at a wavelength longer than the excitation wavelength. The emission data are used to produce high-resolution, high-contrast 2D or 3D images of tissues.2
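The thermoelastic mechanism described above is often summarized by the relation p0 = Γ · μa · F: the initial acoustic pressure is the Grüneisen parameter times the optical absorption coefficient times the laser fluence. The sketch below uses representative soft-tissue values that are assumptions for illustration, not measurements from this project.

```python
def initial_pressure_pa(grueneisen: float, mu_a_per_cm: float,
                        fluence_j_per_cm2: float) -> float:
    """Initial photoacoustic pressure p0 = Gamma * mu_a * F.

    mu_a * F has units of J/cm^3, and 1 J/cm^3 equals 1e6 Pa.
    """
    return grueneisen * mu_a_per_cm * fluence_j_per_cm2 * 1e6

# Assumed representative values: Gamma ~ 0.2 for soft tissue, mu_a ~ 0.1 cm^-1
# for a dilute absorber, and a 10 mJ/cm^2 laser pulse fluence.
p0 = initial_pressure_pa(0.2, 0.1, 0.01)
print(f"initial pressure ~ {p0:.0f} Pa")  # ~ 200 Pa
```

Stronger absorbers (higher μa) or higher fluence raise the initial pressure linearly, which is why good contrast agents matter for depth imaging.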

The objective of this photoacoustic imaging project is to engineer bacteria to produce contrast agents in the presence of biomarkers specific to gut inflammation and colon cancer, and ultimately to deliver the bacteria into the intestines. The bacteria will produce the contrast agents in response to certain biomarkers, and lasers will excite the contrast agents, which will emit signals in local, targeted areas, allowing for a non-invasive imaging method. To achieve this, we constructed plasmids in which a nitric-oxide-sensing promoter (soxR/S) or a hypoxia-sensing promoter (narK or fdhf) is fused to genes encoding violacein or the near-infrared fluorescent proteins iRFP670 and iRFP713, which emit at 670 nm and 713 nm, respectively. Nitric oxide and hypoxia, biological markers of gut inflammation in both mice and humans, would therefore promote expression of the desired iRFPs or violacein.3,4

Results and Discussion


To test the inducibility and detectability of our iRFPs, we used pBAD, a promoter from the arabinose operon of E. coli.5 We formed genetic circuits consisting of the pBAD expression system and iRFP670 or iRFP713 (Figure 1a). AraC, a constitutively produced transcription regulator, changes conformation in the presence of arabinose, allowing for the activation of the pBAD promoter.

[Figure 1b]

Fluorescence levels emitted by the iRFPs increased significantly when placed in wells containing increasing concentrations of arabinose (Figure 2). This correlation suggests that our selected iRFPs fluoresce sufficiently when promoters are induced by environmental signals. The results of the arabinose assays showed that we successfully produced iRFPs; the next steps were to engineer bacteria to produce the same iRFPs under nitric oxide and hypoxia.
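A standard way to analyze an induction assay like this is to normalize fluorescence by culture density (OD600) and check for a dose-dependent rise. A minimal sketch with hypothetical readings, not the team's actual data:

```python
# Hypothetical plate-reader readings for increasing arabinose concentrations;
# all numbers are illustrative, not the team's measurements.
arabinose_pct = [0.0, 0.002, 0.02, 0.2]        # inducer concentration (% w/v)
fluorescence = [120.0, 450.0, 1800.0, 5200.0]  # arbitrary units
od600 = [0.40, 0.42, 0.41, 0.39]               # culture density

# Normalize so that brighter wells are not simply denser wells.
norm = [f / od for f, od in zip(fluorescence, od600)]

# Dose dependence: the normalized signal rises monotonically with inducer.
is_dose_dependent = all(a < b for a, b in zip(norm, norm[1:]))
print(is_dose_dependent)  # True for these readings
```

The same normalization and monotonicity check applies to the DETA/NO and hypoxia assays discussed next, where no such rise was observed.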

Nitric Oxide

The next step was to test the nitric oxide induction of iRFP fluorescence. We used a genetic circuit consisting of a constitutive promoter and the soxR gene, which in turn expresses the SoxR protein (Figure 1b). In the presence of nitric oxide, SoxR changes form to activate the promoter soxS, which activates the expression of the desired gene. The source of nitric oxide added to our engineered bacteria samples was diethylenetriamine/nitric oxide adduct (DETA/NO).

Figure 3 shows no significant difference in fluorescence/OD600 (fluorescence normalized to culture density) across DETA/NO concentrations. This implies that our engineered bacteria were unable to detect the nitric oxide biomarker and produce iRFP; future troubleshooting includes verifying promoter strength and sample conditions. Furthermore, nitric oxide has an extremely short half-life of a few seconds, which may not give most of the engineered bacteria enough time to sense it, limiting iRFP production and fluorescence.

[Figure 1c]


Hypoxia

We also tested the induction of iRFP fluorescence with the hypoxia-inducible promoters narK and fdhf. We expected iRFP production and fluorescence to increase under these promoters in anaerobic conditions (Figure 1c and d).

However, we observed the opposite: decreased fluorescence was measured for both iRFP constructs under both promoters when exposed to hypoxia (Figure 4). This suggests that our engineered bacteria were unable to detect the hypoxia biomarker and produce iRFP; future troubleshooting includes verifying promoter strength and sample conditions.

Future Directions

Further studies include testing the engineered bacteria in co-culture with colon cancer cells and developing other constructs that will enable bacteria to sense cancerous tumors and fluoresce there for imaging and treatment purposes.

Violacein has anti-cancer therapy potential

Violacein is a fluorescent pigment suitable for in vivo photoacoustic imaging in the near-infrared range and shows anti-tumoral activity6. It has high potential for future work in bacterial tumor targeting. We have succeeded in assembling the violacein construct using Golden Gate shuffling7 and intend to use it in experiments such as the nitric oxide and hypoxia assays we performed for iRFP670 and iRFP713.

Invasin can allow for targeted cell therapy

Certain bacteria are able to invade mammalian cells using invasin, a surface protein that binds beta integrins.8-9 If we engineer E. coli that express invasin as well as the genetic circuits capable of sensing nitric oxide and/or hypoxia, we can potentially allow the E. coli to invade colon cells and release contrast agents for photoacoustic imaging, or therapeutic agents such as violacein, only in the presence of specific biomarkers.10 Additionally, if we engineer the invasin-expressing bacteria to invade colon cancer cells only and not normal cells, this approach would potentially allow for localized targeting and treatment of cancerous tumors. Because we will be unable to test our engineered bacteria in an actual human gut, this design lets us create scenarios with parameters closer to the conditions observed there.


The International Genetically Engineered Machine (iGEM) Foundation is an independent, non-profit organization dedicated to education and competition, the advancement of synthetic biology, and the development of an open community and collaboration.

This project would not have been possible without the patient instruction and generous encouragement of our Principal Investigators (Dr. Beth Beason-Abmayr and Dr. Jonathan Silberg, BioSciences at Rice), our graduate student advisors and our undergraduate team. We would also like to thank our iGEM collaborators.

This work was supported by the Wiess School of Natural Sciences and the George R. Brown School of Engineering and the Departments of BioSciences, Bioengineering, and Chemical and Biomolecular Engineering at Rice University; Dr. Rebecca Richards-Kortum, HHMI Pre-College and Undergraduate Science Education Program Grant #52008107; and Dr. George N. Phillips, Jr., Looney Endowment Fund.

If you would like to know more about our project and our team, please visit our iGEM wiki.


  1. Ntziachristos, V. Nat Methods. 2010, 7, 603-614.
  2. Weber, J. et al. Nat Methods. 2016, 13, 639-650.
  3. Archer, E. J. et al. ACS Synth. Biol. 2012, 1, 451–457.
  4. Hӧckel, M.; Vaupel, P. JNCI J Natl Cancer Inst. 2001, 93, 266−276.
  5. Guzman, L. M. et al. J Bacteriol. 1995, 177, 4121-4130.
  6. Shcherbakova, D. M.; Verkhusha, V. V. Nat Methods. 2013, 10, 751-754.
  7. Engler, C. et al. PLOS One. 2009, 4, 1-9.
  8. Anderson, J. C. et al. J Mol Biol. 2006, 355, 619-627.
  9. Arao, S. et al. Pancreas. 2000, 20, 619-627.
  10. Jiang, Y. et al. Sci Rep. 2015, 19, 1-9.


A Fourth Neutrino? Explaining the Anomalies of Particle Physics



The very first neutrino experiments discovered that neutrinos exist in three flavors and can oscillate between those flavors as they travel through space. However, many recent experiments have collected anomalous data that contradict the three-flavor hypothesis, suggesting instead that there may exist a fourth neutrino, called the sterile neutrino, that interacts solely through the gravitational force. While there is no conclusive evidence proving the existence of a fourth neutrino flavor, scientists designed the IceCube laboratory at the South Pole to search for this newly hypothesized particle. Due to its immense size and sensitivity, IceCube is the laboratory best equipped to test for the existence of these particles.


Neutrinos are ubiquitous subatomic elementary particles that are produced in a variety of ways. Some are produced in collisions between particles in the atmosphere, while others result from the decay of unstable nuclei.1,3 Neutrinos are thought to play a role in the interactions between matter and antimatter, and to have significantly influenced the formation of the universe.3 Thus, neutrinos are of paramount interest in particle physics, with the potential to expand our understanding of the universe. When they were first posited, neutrinos were thought to have no mass because they have very little impact on the matter around them. Decades later, however, it was determined that they do have mass but interact with other matter only through the weak nuclear force and gravity.2

Early neutrino experiments measured the number of neutrinos arriving from the sun and found almost one third of the predicted value. Coupled with other neutrino experiments, these observations gave rise to the notion of neutrino flavors and neutrino flavor oscillations. There are three flavors of the standard neutrino: electron (ve), muon (vμ), and tauon (v𝜏). Each neutrino is a decay product produced alongside its namesake particle; for example, ve is produced alongside an electron during the decay process.9 Neutrino oscillation was proposed after these results: if a neutrino of a given flavor is produced during decay, then at a certain distance from that point, the chance of observing a neutrino with the properties of a different flavor becomes non-zero.2 Essentially, if a ve is produced, then at a sufficient distance the neutrino may be detected as a vμ or v𝜏. This behavior is caused by a discrepancy between the flavor and mass eigenstates of neutrinos.

In addition to the three flavor states, there are also three mass eigenstates, or states in which neutrinos have definite mass. Experimental evidence shows that flavor and mass are two distinct properties of neutrinos; as a result, neutrinos of the same flavor can have different masses. For example, two electron neutrinos will have the same definite flavor, but not necessarily the same definite mass state. It is this discrepancy between the mass eigenstates that leads to oscillation between flavors, with probability given by P(va→vb) = sin²(2θ)sin²(1.27Δm²L/E), where va and vb are two flavors, θ is the mixing angle, Δm² is the difference between the squared masses of the two relevant mass eigenstates (in eV²), L is the distance from source to detector (in km), and E is the energy of the neutrino (in GeV).6 Thus, each flavor is a different linear combination of the three states of definite mass.

The equation introduces the important concept of the mixing angle, which quantifies the mismatch between flavor and mass states and accounts for neutrino flavor oscillations. If the mixing angle were zero, the mass states and flavor states would be identical and no oscillations could occur: all muon neutrinos produced at a source would still be muon neutrinos at the detector, since P(vμ→vb) = 0 for every other flavor vb. On the other hand, at a mixing angle of π/4, the oscillation probability reaches P(vμ→vb) = 1 at the appropriate distance and energy, and all muon neutrinos would oscillate into the other flavor.9
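The two-flavor oscillation probability is straightforward to compute directly. The sketch below (the function name and the illustrative parameter values are our own, not from any experiment's code) assumes the conventional units that accompany the factor 1.27: L in kilometers, E in GeV, and Δm² in eV².

```python
import math

def oscillation_probability(theta, delta_m2, L, E):
    """Two-flavor neutrino oscillation probability P(a -> b).

    theta    : mixing angle in radians
    delta_m2 : difference of squared mass eigenvalues, in eV^2
    L        : source-to-detector distance, in km
    E        : neutrino energy, in GeV
    The constant 1.27 absorbs the unit conversions for these choices.
    """
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * delta_m2 * L / E) ** 2

# Zero mixing angle: flavor and mass states coincide, so no oscillation.
print(oscillation_probability(0.0, 2.5e-3, 500.0, 1.0))  # 0.0

# Maximal mixing (theta = pi/4): the probability can approach 1
# at the right combination of distance and energy.
print(oscillation_probability(math.pi / 4, 2.5e-3, 500.0, 1.0))
```

Varying L/E while holding θ and Δm² fixed traces out the characteristic oscillation pattern that detectors compare against.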

Anomalous Data

Some experimental data have countered the notion of three-flavor neutrino oscillations.3 If the experimental interpretation is correct, it would point to the existence of a fourth, or even an additional fifth, mass state, opening up the possibility of other mass states that could be occupied by the hypothesized sterile neutrino. The most compelling anomalous data arise from the Liquid Scintillator Neutrino Detector (LSND) Collaboration and MiniBooNE. The LSND Collaboration at Los Alamos National Laboratory looked for oscillations between vμ neutrinos produced from muon decay and ve neutrinos, and measured an oscillation probability that differed from the three-flavor prediction.6 These results strongly suggest oscillation into an additional neutrino flavor. A subsequent experiment at Fermilab, the mini Booster Neutrino Experiment (MiniBooNE), again saw a discrepancy between predicted and observed ve appearance, with an excess of ve events.7 All of these results have a low probability of fit to the standard model of particle physics, lending plausibility to the hypothesis that more than three neutrino flavors exist.

GALLEX, an experiment that measured neutrinos from the sun and from chromium-51 sources, along with several reactor neutrino experiments, also produced data inconsistent with the standard model's predictions for neutrinos. This evidence suggests the presence of these new particles but does not conclusively establish their existence.4,5 Thus, scientists designed a new project at the South Pole to search specifically for the newly hypothesized sterile neutrinos.

IceCube Studies

IceCube, a particle physics laboratory, was designed specifically for collecting data concerning sterile neutrinos. Its vast size and acute precision allow it to detect and record the large number of events needed for conclusive data. Neutrinos that reach IceCube's detectors from below are upgoing atmospheric neutrinos and thus have already traversed the Earth; a fraction of them pass through the Earth's core. If sterile neutrinos exist, the large gravitational force of the Earth's core should cause some muon neutrinos that traverse it to oscillate into sterile neutrinos, resulting in fewer detected muon neutrinos than a model containing only three standard mass states predicts, confirming the existence of a fourth flavor.3

For particles that pass upward through IceCube's detectors, the Earth filters out the charged subatomic particle background noise, allowing only the detection of muons (the particles of interest) from neutrino interactions. The small fraction of upgoing atmospheric neutrinos that reach the ice surrounding the detector site undergo reactions with the bedrock and ice to produce muons. These newly created muons then traverse the ice, emitting Cherenkov light, a type of electromagnetic radiation, which is detected by IceCube's Digital Optical Modules (DOMs). This radiation is produced when a massive particle passes through a substance faster than light can travel through that same substance.8

In 2011-2012, a study was conducted using data from the full array of DOMs rather than just a portion.8 These data, along with previous data, were examined for conclusive evidence of sterile neutrino oscillations in samples of atmospheric neutrinos. The experimental data were compared to a Monte Carlo simulation. For each hypothesis about the makeup of the sterile neutrino, the Poissonian log likelihood, a probability function that quantifies how well experimental data fit a hypothetical model, was calculated. Based on the results shown in Figure 2, no evidence points toward sterile neutrinos.8
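The idea behind a bin-by-bin Poissonian log likelihood can be sketched in a few lines. This is a generic illustration of the statistic, not IceCube's actual analysis code; the function name and the toy event counts are hypothetical.

```python
import math

def poisson_log_likelihood(observed, expected):
    """Poissonian log likelihood of observed bin counts given a model.

    observed : per-bin event counts (non-negative integers)
    expected : per-bin model predictions (positive floats)
    Returns the sum over bins of log P(n_i | mu_i) for Poisson statistics:
    log P(n | mu) = n*log(mu) - mu - log(n!).
    """
    total = 0.0
    for n, mu in zip(observed, expected):
        total += n * math.log(mu) - mu - math.lgamma(n + 1)
    return total

# Toy data: the hypothesis closest to the observed counts scores higher,
# which is how competing sterile-neutrino hypotheses are ranked.
data    = [12, 30, 25, 8]
model_a = [11.5, 29.0, 26.0, 8.5]   # close to the data
model_b = [20.0, 20.0, 20.0, 20.0]  # poor fit
print(poisson_log_likelihood(data, model_a) >
      poisson_log_likelihood(data, model_b))  # True
```

In practice each hypothesis (mixing angle, Δm²) yields its own expected counts from simulation, and the hypothesis maximizing this likelihood is the best fit to the data.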


Other studies have also been conducted at IceCube, and have also found no indication of sterile neutrinos. Although there is strong evidence against the existence of sterile neutrinos, this does not completely rule out their existence. These experiments have focused only on certain mixing angles and may have different results for different mixing angles. Also, if sterile neutrinos are conclusively found to be nonexistent by IceCube, there is still the question of why the anomalous data appeared at LSND and MiniBooNE. Thus, IceCube will continue sterile neutrino experiments at variable mixing angles to search for an explanation to the anomalies observed in the previous neutrino experiments.


  1. Fukuda, Y. et al. Evidence for Oscillation of Atmospheric Neutrinos. Phys. Rev. Lett. 1998, 81, 1562.
  2. Beringer, J. et al. Review of Particle Physics. Phys. Rev. D. 2012, 86, 010001.
  3. Schmitz, D. W. Viewpoint: Hunting the Sterile Neutrino. Physics. [Online] 2016, 9, 94.
  4. Hampel, W. et al. Final Results of the 51Cr Neutrino Source Experiments in GALLEX. Phys. Lett. B. 1998, 420, 114.
  5. Mention, G. et al. Reactor Antineutrino Anomaly. Phys. Rev. D. 2011, 83, 073006.
  6. Aguilar-Arevalo, A. A. et al. Evidence for Neutrino Oscillations from the Observation of ve Appearance in a vμ Beam. Phys. Rev. D. 2001, 64, 112007.
  7. Aguilar-Arevalo, A. A. et al. Phys. Rev. Lett. 2013, 110, 161801.
  8. Aartsen, M. G. et al. Searches for Sterile Neutrinos with the IceCube Detector. Phys. Rev. Lett. 2016, 117, 071801.