
Developments in Gold Nanoparticles and Cancer Therapy



Nanotechnology has recently produced several breakthroughs in localized cancer therapy. Specifically, directing the accumulation of gold nanoparticles (GNPs) in cancerous tissue enables the targeted release of cytotoxic drugs and enhances the efficacy of established cancer therapy methods. This article will give a basic overview of the structure and design of GNPs, the role of GNPs in drug delivery and localized cancer therapy, and the challenges in developing and using GNPs for cancer treatment.


Chemotherapy is currently the most broadly utilized method of treatment for most subtypes of cancer. However, cytotoxic chemotherapy drugs are limited by their lack of specificity; chemotherapeutic agents target all of the body’s most actively dividing cells, giving rise to a number of dangerous side effects.1 GNPs have recently attracted interest due to their ability to act as localized cancer treatments—they offer a non-cytotoxic, versatile, specific targeting mechanism for cancer treatment and a high binding affinity for a wide variety of organic molecules.2 Researchers have demonstrated the ability to chemically modify the surfaces of GNPs to induce binding to specific pharmaceutical agents, biomacromolecules, and malignant cell tissues. This allows GNPs to deliver therapeutic agents at tumor sites more precisely than standard intravenous chemotherapy can. GNPs also increase the efficiency of established cancer therapy methods, such as hyperthermia.3 This article will briefly cover the design and characteristics of GNPs, and then outline both the roles of GNPs in cancer therapy and the challenges in implementing GNP-based treatment options.

Design and Characteristics of Gold Nanoparticles

Nanoparticle Structure

To date, gold nanoparticles have been developed in several shapes and sizes.4 Although GNPs have been successfully synthesized as rods, triangles, and hexagons, spherical GNPs have proven to be among the most biocompatible nanoparticle models. GNP shape also affects accumulation behavior in cells.4 A study by Tian et al. found that hexagonal GNPs produced a greater rate of vesicular aggregation than either spherical or triangular GNPs.

Differences in GNP shape also cause variation in surface area and volume, which affects cellular uptake, biocompatibility, and therapeutic efficiency.4 For example, GNPs with greater surface area or more vertices possess enhanced cell binding capabilities but also heightened cell toxicity. Clearly, GNPs must be designed with respect to their intended function.

Nanoparticle Surface Modification

In order to target specific cells or tissues, GNPs must undergo a ligand attachment process known as surface modification. The types of ligand particles attached to a GNP affect its overall behavior. For example, ligand particles consisting of inert molecular chains can stabilize nanoparticles against inefficient aggregation.5 Polyethylene glycol (PEG) is a hydrocarbon chain that stabilizes GNPs by repelling other molecules using steric effects; incoming molecules are unable to penetrate the PEG-modified surface of the GNPs.5 Certain ligand sequences can enable a GNP to strongly bind to a target molecule by molecular recognition, which is determined by geometric matching of the surfaces of the two molecules.5

Tumor cells often express more cell surface receptors than normal cells; targeting these receptors for drug delivery increases drug accumulation and therapeutic efficacy.6 However, the receptors on the surface of tumor cells must be exclusive to cancerous cells in order to optimize nanoparticle and drug targeting. For example, most tumor cells have integrin receptors.7 To target these receptors, the surfaces of the therapeutic GNPs can be functionalized with the arginine-glycine-aspartic acid (RGD) sequence, which binds to key members within the integrin family.8 Successful targeting can lead to endocytosis and intracellular release of the therapeutic elements that the GNPs carry.

An important aspect of GNP therapy is the efficient targeting and release of remedial agents at the designated cancerous site. There are two types of GNP targeting: passive and active. In passive targeting, nanoparticles accumulate at a specific site through physicochemical factors (e.g., size, molecular weight, and shape), extravasation, or pharmacological factors. Release can be triggered by internal factors such as pH changes or by external stimuli such as the application of light.2 In active targeting, ligand molecules attached to the surface of a GNP render it capable of effectively delivering pharmaceutical agents and large biomacromolecules to specific cells in the body.

Gold Nanoparticles in Localized Cancer Therapy


Hyperthermia

Hyperthermia is a localized cancer therapy in which cancerous tissue is exposed to high temperatures to induce cell death. Placing gold nanoparticles at the site of therapy can improve the efficiency and effectiveness of hyperthermia, leading to lower levels of tumor growth. GNPs aggregated at cancerous tissues allow intense, localized increases in temperature that better induce cell death. In one study on mice, breast tumor tissue containing aggregated GNPs experienced a temperature increase 28°C higher than control breast tumor tissue when subjected to laser excitation.9 While the control tissue had recurring cancerous growth, the introduction of GNPs significantly increased the therapeutic temperature of the tumors and permanently damaged the cancerous tissue.

Organelle Targeting

GNPs are also capable of specifically targeting malfunctioning organelles in tumor cells, such as nuclei or mitochondria. The nucleus is an important target in localized cancer therapy since it controls the processes of cell growth, proliferation, and apoptosis, which are commonly defective in tumor cells. Accumulation of GNPs inside nuclei can disrupt faulty nuclear processes and eventually induce cell apoptosis. The structure of the GNP used to target the nucleus determines the final effect. For example, small spherical and “nanoflower”-shaped GNPs compromise nuclear functioning, but large GNPs do not.10

Dysfunctional mitochondria are also valuable targets in localized therapy as they control the energy supply of tumor cells and are key regulators of their apoptotic pathways.10 Specific organelle targeting causes internal cell damage to cancerous tissue only, sparing normal tissue from the damaging effects of therapeutic agents. This makes nuclear and mitochondrial targeting a desirable treatment option that merits further investigation.

Challenges of Gold Nanoparticles in Localized Cancer Therapy

Cellular Uptake

Significant difficulties have been encountered in engineering a viable method of cellular GNP uptake. Notably, GNPs must not only bind to a given cancer cell’s surface and undergo endocytosis into the cell, but they must also evade endosomes and lysosomes.10 These obstacles are present regardless of whether the GNPs are engineered to target specific organelles or release therapeutic agents inside cancerous cells. Recent research has demonstrated that GNPs can avoid digestion by being functionalized with certain surface groups, such as polyethylenimine, that allow them to escape endosomes and lysosomes.10

Toxic Effects on Local Tissue

The cytotoxic effects of GNPs on local cells and tissues remain poorly understood.11 However, recent research developments have revealed a relationship between the shapes and sizes of GNPs and their cell toxicities. Larger GNPs have been found to be more cytotoxic than smaller ones.12 Gold nanospheres were lethal at lower concentrations, while gold nanostars were less toxic.13 While different shapes and sizes of GNPs can be beneficial in various localized cancer therapies, GNPs must be optimized on an application-by-application basis with regard to their toxicity level.


Gold nanoparticles have emerged as viable agents for cancer therapy. GNPs are effective in targeting malignant cells specifically, making them less toxic to normal cells than traditional cancer therapies. By modifying their surfaces with different chemical groups, scientists can engineer GNPs to accumulate at specific tumor sites. The shape and size of a GNP also affect its behavior during targeting, accumulation, and cellular endocytosis. After accumulation, GNPs may be used to enhance the efficacy of established cancer therapies such as hyperthermia. Alternatively, GNPs can deliver chemotherapy drugs to tumor cells internally or target specific organelles inside the cell, such as the nucleus and the mitochondria.

Although some research has shown that GNPs themselves do not produce acute cytotoxicity in cells, other research has indicated that nanoparticle concentration, shape, and size may all affect cytotoxicity. Therefore, nanoparticle design should be optimized to increase cancerous cell death but limit cytotoxicity in nearby normal cells.


  1. Estanquiero, M. et al. Colloid Surface B. 2015, 126, 631-648.
  2. Pissuwan, D. et al. J Control Release. 2009, 149, 65-71.
  3. Chatterjee, D. V. et al. Ther Deliv. 2011, 2, 1001-1014.
  4. Tian, F. et al. Nanomedicine. 2015, 10, 2643-2657.
  5. Sperling, R.A. et al. Phil Trans R Soc A. 2010, 368, 1333-1383.
  6. Amreddy, N. et al. Int J Nanomedicine. 2015, 10, 6773-6788.
  7. Kumar, A. et al. Biotechnol Adv. 2013, 31, 593-606.
  8. Perlin, L. et al. Soft Matter. 2008, 4, 2331-2349.
  9. Jain, S. et al. Br J Radiol. 2012, 85, 101-113.
  10. Kodiha, M. et al. Theranostics. 2015, 5, 357-370.
  11. Nel, A. et al. Science. 2006, 311, 622-627.
  12. Pan, Y. et al. Small. 2007, 3, 1941-1949.
  13. Favi, P. M. et al. J Biomed Mater Res A. 2015, 103, 3449-3462.


How Bionic Eyes Are Changing the Way We See the World


Most blind people wear sunglasses, but what if their glasses could actually restore their vision? Such a feat seems miraculous, but the development of new bionic prostheses may make such miracles a reality. These devices work in two ways: by replacing non-functional parts of the visual pathway or by creating alternative neural avenues to provide vision.

When attempting to repair or restore lost vision, it is important to understand how we normally receive and process visual information. Light enters the eye and is refracted by the cornea to the lens, which focuses the light onto the retina. The cells of the retina, namely photoreceptors, convert the light into electrical impulses, which are transmitted to the primary visual cortex by the optic nerve. In short, this process serves to translate light energy into electrical energy that our brain can interpret. For patients suffering from impaired or lost vision, one of the steps in this process is either malfunctioning or not functioning at all.1,2

Many patients with non-functional vision can be treated with current surgical techniques. For example, many elderly individuals develop cataracts, in which the lens of the eye becomes increasingly opaque, resulting in blurred vision. This condition can be rectified fairly simply with a surgical replacement of the lens. However, loss of vision resulting from a problem with the retina or optic nerve can very rarely be corrected surgically due to the sensitive nature of these tissues. Such pathologies include retinitis pigmentosa, an inherited degenerative disease affecting retinal photoreceptors, and head trauma, which can damage the optic nerve. In these cases, a visual prosthesis may be the solution. These devices, often called “bionic eyes,” are designed to repair or replace damaged ocular functions. Such prostheses restore vision by targeting damaged components in the retina, optic nerve, or the brain itself.

One set of visual prostheses works by correcting impaired retinal function via electrode arrays implanted between the retinal layers. The electrodes serve as substitutes for lost or damaged photoreceptors, translating light energy into electrical impulses. The Boston Retinal Implant Project has developed a device involving an eyeglass-mounted camera and an antenna implanted in the skin near the eye.3 The camera transmits visual data to the antenna in a manner reminiscent of a radio broadcast. The antenna then decodes the signal and sends it through a wire to an implanted subretinal electrode array, which relays it to the brain. The problem with this system is that the camera is fully external and independent of the eye’s position, meaning the patient must move his or her entire head to survey a scene. Germany’s Retinal Implant AG team seeks to rectify this problem with the Alpha IMS implant system. In this system, the camera itself is subretinal, and “converts light in each pixel into electrical currents.”2

The Alpha IMS system is still undergoing experimental clinical trials in Europe, but it is facing some complications. Firstly, the visual acuity of tested patients is around 20/1000, far worse than the 20/200 acuity that defines legal blindness. Secondly, the system’s power supply is implanted in a very high-risk surgical procedure, which can endanger patients. In an attempt to overcome the problems faced by both The Boston Retinal Implant Project and Retinal Implant AG, Dr. Daniel Palanker at Stanford and his colleagues are currently developing a subretinal prosthesis involving a goggle-mounted video camera and an implanted photodiode array. The camera receives incoming light and projects the image onto the photodiode array, which then converts the light into pulsed electrical currents. These currents stimulate nearby neurons to relay the signal to the brain. As Dr. Palanker says, “This method for delivering information is completely wireless, and it preserves the natural link between ocular movement and image perception.”2 Human clinical trials are slated to begin in 2016, and Palanker and his team are confident that the device will be able to produce 20/250 visual acuity or better in affected patients.

A potentially safer set of visual prostheses includes suprachoroidal implants. Very similar to the aforementioned subretinal implants, these devices also replace damaged components of the retina. The only difference is that suprachoroidal implants are placed between the choroid layer and the sclera, rather than between the retinal layers. This difference in location allows these devices to be surgically implanted with less risk, as they do not breach the retina itself. Furthermore, these devices are larger than subretinal implants, “allowing them to cover a wider visual field, ideal for navigation purposes.” Development of suprachoroidal devices began in the 1990s at both Osaka University in Japan and Seoul National University in South Korea. Dr. Lauren Ayton and Dr. David Nayagam of the Bionic Vision Australia (BVA) research partnership are leading current research. BVA has tested a prototype of a suprachoroidal device in patients with retinitis pigmentosa, and results have been promising. Patients were able to “better localize light, recognize basic shapes, orient in a room, and walk through mobility mazes with reduced collisions.” More testing is planned for the future, along with improvements to the device’s design.2

Both subretinal and suprachoroidal implants work by replacing damaged photoreceptors, but they rely on a functional neural network between the retina and the optic nerve. Replacing damaged photoreceptors will not help a patient if he or she lacks the neural network that can transmit the signal to the brain. This neural network is composed of ganglion cells at the back of the retina that connect to the optic nerve; these ganglion cells can be viewed as the “output neurons of the eye.” A third type of visual prosthesis targets these ganglion cells. So-called epiretinal implants are placed in the final cell layer of the retina, with electrodes directly stimulating the optic nerve. Because these devices are implanted in the last retinal layer, they work “regardless of the state of the upstream neurons.”2 The main advantage of an epiretinal implant, then, is that in cases of widespread retinal damage due to severe retinitis pigmentosa, the device provides a shortcut directly to the optic nerve.

The most promising example of an epiretinal device is the Argus II Visual Prosthesis System, developed by Second Sight. The device, composed of a glasses-mounted camera that wirelessly transmits visual data to an implanted microelectrode array, received FDA marketing approval in 2013. Clinical trials have shown a substantial increase in visual perception and acuity in patients with severe retinitis pigmentosa, and the system has been implanted in more than 50 patients to date.

The common limitation of all these visual prostheses (subretinal, suprachoroidal, and epiretinal) is that they rely on an intact and functional optic nerve. But some blind patients have damaged optic nerves due to head trauma. The optic nerve connects the eye to the brain, so for patients with damage in this region, bionics researchers must find a way to target the brain itself. Experiments in the early 20th century showed that, by stimulating certain parts of the brain, blind patients could perceive light flashes known as phosphenes. Building from these experiments, modern scientists are working to develop cortical prostheses implanted in either the visual cortex of the cerebrum or the lateral geniculate nucleus (LGN), both of which are key in the brain’s ability to interpret visual information. Such a device would not truly restore natural vision, but produce artificial vision through the elicitation of phosphene patterns.

One group working to develop a cortical implant is the Monash Vision Group (MVG) in Melbourne, Australia, coordinated by Dr. Collette Mann and colleagues. MVG’s Gennaris bionic-vision system consists of a glasses-mounted camera, a small computerized vision processor, and a series of multi-electrode tiles implanted in the visual cortex. The camera transmits images to the vision processor, which converts the picture into a waveform pattern and wirelessly transmits it to the multi-electrode tiles. Each electrode on each tile can generate a phosphene; all the electrodes working in unison can generate phosphene patterns. As Dr. Mann says, “The patterns of phosphenes will create 2-D outlines of relevant shapes in the central visual field.”2 The Illinois Institute of Technology is developing a similar device, an intracortical visual prosthesis termed the IIT ICVP. The device’s developers seek to address the substantial number of blind patients in underdeveloped countries by making the device more affordable. The institute says that “one potential advantage of the IIT ICVP system is its modularity,” and that by using fewer parts, they “could make the ICVP economically viable, worldwide.”4

These visual prostheses represent the culmination of decades of work by hundreds of researchers across the globe. They portray a remarkable level of collaboration between scientists, engineers, clinicians, and more, all for the purpose of restoring vision to those who live without it. And with an estimated 40 million individuals worldwide suffering from some form of blindness, these devices are making miracles reality.


  1. The Scientist Staff. The Eye. The Scientist, 2014, 28.
  2. Various Researchers. The Bionic Eye. The Scientist, 2014, 28.
  3. Boston Retinal Implant Project. (accessed Oct. 9, 2015)
  4. Intracortical Visual Prosthesis. (accessed Oct. 10, 2015)


From a Pile of Rice to an Avalanche: A Brief Introduction to Granular Materials


Communities living at the foot of the Alps need a way to predict the occurrence of avalanches for timely evacuation, but monitoring the entire Alpine range is impossible. Fortunately for those near the Alps, the study of granular materials has allowed scientists to move mountains into labs and use small, contained systems (like piles of rice) to simulate real-world avalanche conditions. Granular materials, by definition, are conglomerates of discrete visible particles that lose kinetic energy during internal collisions; they are neither so small as to be invisible to the naked eye, nor so big that they cannot be studied as distinct objects.1 Their size situates granular materials between common objects and individual molecules.

While studying extremely small particles, scientists stumbled upon an unsettling contradiction: the classical laws governing the macroscopic universe do not always apply at microscopic scales. For example, Niels Bohr sought to apply classical mechanics to explain the orbits of electrons around nuclei by comparing them to the rotation of planets around stars. However, it was later discovered that an electron behaves in a much more complicated way than Bohr had anticipated. At its size, the electron gained properties that could only be described through an entirely new set of laws known as quantum mechanics.

Though granular materials do not exist at the quantum level, their distinct size necessitates an analogous departure from classical thought. A new category of physical laws must be created to describe the basic interactions among particles of this unique size. Intuitively, this makes sense; anyone who has cooked rice or played with sand knows that the individual grains behave more like water than solid objects. Scientists are intrigued by these materials because of the variation in their behaviors in different states of aggregation. More importantly, since our world consists of granular materials such as coffee, beans, dirt, snowflakes, and coal, their study sheds new light on the prediction of avalanches and earthquakes.

The physical properties of granular flow vary with the concentration of grains. At different concentrations, the grains experience different magnitudes of stress and dissipate energy in different ways. Since it is hard to derive a unifying formula to describe granular flows of varying concentrations, physicists use three sets of equations corresponding to three states of aggregation, resembling the gaseous, liquid, and solid phases. When the material is dilute enough for each grain to randomly fluctuate and translate, it acts like a gas. When the concentration increases, particles collide more frequently and the material behaves like a liquid. Because these collisions are inelastic, a fraction of the grains’ kinetic energy dissipates into heat during each collision, so the increased collision frequency in this liquid-like phase results in greater energy dissipation and stress. Finally, when the concentration increases to 50% or more, the material resembles a solid: the grains remain in lasting contact, resulting in predominantly frictional stress and energy dissipation.1
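The three regimes described above can be sketched as a simple classifier. The 50% threshold for solid-like behavior comes from the text; the 10% boundary between gas-like and liquid-like behavior is an illustrative assumption, since the article gives no figure for it.

```python
def granular_phase(packing_fraction):
    """Classify a granular material's state of aggregation by its
    grain concentration (packing fraction, between 0 and 1).

    The 50% solid threshold is taken from the article; the 10%
    gas/liquid boundary is an illustrative placeholder.
    """
    if packing_fraction >= 0.50:
        return "solid"   # lasting contacts: frictional stress dominates
    elif packing_fraction >= 0.10:
        return "liquid"  # frequent inelastic collisions dissipate energy
    else:
        return "gas"     # grains fluctuate and translate freely
```

Real granular systems also depend on grain shape, friction, and moisture, so any such threshold model is only a first approximation.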

Avalanches come in two types, flow and powder, each of which requires a specific combination of the gas, liquid, and solid granular models. In a flow avalanche, the descending layer consists of densely packed ice grains. The solid phase of granular materials best models this, meaning that friction becomes the chief analytical aspect. In a powder avalanche, particles of snow do not stick together and descend in a huge, white cloud.2 The gaseous and liquid models of granular materials are more appropriate here.

Physicists can use these avalanche models to investigate the phenomena leading up to a real-world avalanche. They can simulate the disturbance of a static pile of snow by constantly adding grains to a pile, or by perturbing a layer of grains on the pile’s surface. In an experiment conducted by statistical physicists Dr. Daerr and Dr. Douady, layers of glass beads 1.8 to 3 mm in diameter were poured onto a velvet surface, producing two distinct types of avalanches under different regimes determined by the tilt angle of the plane and the thickness of the layer of glass beads.3

For those of us who are not experts in avalanches, there are a few key points to take away from Daerr and Douady. They found that a critical tilt angle exists for spontaneous avalanches. When the angle of the slope remained under the critical angle, the size of the flow did not grow, even if a perturbation caused an additional downfall of grains. Interestingly, when the angle of the slope was increased significantly, the snow uphill from the perturbation point also contributed to the avalanche, meaning that avalanches can affect elevations higher than their starting points. Moreover, the study found that the angle of the remaining slope after the avalanche was always less than the original angle of the slope, indicating that after a huge avalanche, a mountain remains stable until a change in external conditions occurs.3 Often, a snow mountain with slopes exceeding the critical angle can remain static and harmless for days because of the cohesion between particles.
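These key findings can be summarized in a toy stability model. The qualitative labels and the post-avalanche relaxation value below are illustrative placeholders, not measured quantities from Daerr and Douady’s paper.

```python
def avalanche_outcome(slope_deg, critical_deg, perturbed=False):
    """Qualitative fate of a granular slope tilted at slope_deg,
    given the critical angle for spontaneous avalanches (degrees)."""
    if slope_deg < critical_deg:
        # Below the critical angle, a perturbation triggers only a
        # local downfall of grains; the flow does not grow.
        return "local downfall" if perturbed else "static"
    # At or beyond the critical angle an avalanche can develop, and
    # may even recruit grains uphill from the perturbation point.
    return "avalanche"

def post_avalanche_angle(initial_deg, relaxation_deg=2.0):
    """After an avalanche the slope relaxes to an angle strictly less
    than the initial one (the 2-degree drop is a placeholder)."""
    return initial_deg - relaxation_deg
```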

Situations become more complicated if the grains are not completely dry, as is the case in real snow avalanches. In these scenarios, physicists must modify existing formulas and conduct validating experiments to predict the behaviors of these systems. The applications of granular materials are not limited to predicting avalanches. In geophysics, scientists have investigated the relation of granular materials to earthquakes. For instance, one study used sound waves and glass beads to study the effects of earthquake aftershocks.4 Apart from traditional modeling with piles of rice or sand, the understanding of granular materials in different phases paves the way for computational modeling of large-scale natural disasters like avalanches and earthquakes. These studies will not only help us understand granular materials themselves, but also help us predict certain types of natural disasters.


  1. Jaeger, H. M., Nagel, S. R., and Behringer, R. P. Granular Solids, Liquids and Gases. Rev. Mod. Phys., 1996, 68, No.4, 1259-1273.
  2. Frankenfield, J. Types of Spring and Summer Avalanches. (accessed Oct. 29, 2015).
  3. Daerr, A. and Douady, S. Two Types of Avalanche Behaviour in Granular Media. Nature, 1999, 399, 241-243.
  4. Johnson P. A., et al. Effects of acoustic waves on stick–slip in granular media and implications for earthquakes. Nature, 2008, 451, 57-60.


Nomming on Nanotechnology: The Presence of Nanoparticles in Food and Food Packaging


Nanotechnology is found in a variety of sectors—drug administration, water filtration, and solar technology, to name a few—but what you may not know is that nanotechnology could have been in your last meal.

Over the last ten years, the food industry has been utilizing nanotechnology in a multitude of ways.1 Nanoparticles can increase the opacity of food coloring, make white foods appear whiter, and even prevent ingredients from clumping together.1 Packaging companies now utilize nano-sized clay pieces to make bottles that are less likely to break and better able to retain carbonation.2 Though nanotechnology has proven to be useful to the food industry, some items that contain nanoparticles have not undergone any safety testing or labeling. As more consumers learn about nanotechnology’s presence in food, many are asking whether it is safe.

Since the use of nanotechnology is still relatively new to the food industry, many countries are still developing regulations and testing requirements. The FDA, for example, currently requires food companies that utilize nanotechnology to provide proof that their products won’t harm consumers, but does not require specific tests proving that the actual nanotechnology used in the products is safe.2 This gap is problematic: previous studies have shown that direct contact with certain nanoparticles can be harmful to the lungs and brain, yet much is still unknown about the effects of most nanoparticles. Currently, it is also unclear if nanoparticles in packaging can be transferred to the food products themselves. With so many uncertainties, an activist group centered in Washington, D.C. called Friends of the Earth is advocating for a ban on all use of nanotechnology in the food industry.2

However, the situation may not require such drastic measures. The results of a study published last year in the Journal of Agricultural Economics show that the majority of consumers would not mind the presence of nanotechnology in food if it makes the food more nutritious or safer.3 For example, one application of nanotechnology within the food sector focuses on nanosensors, which reveal the presence of trace contaminants or other unwanted microbes.5 Additionally, nanomaterials could be used to make more impermeable packaging that protects food from UV radiation.5

Nanotechnology could also be applied to water purification, nutrient delivery, and fortification of vitamins and minerals.5 Water filters that utilize nanotechnology incorporate carbon nanotubes and alumina fibers into their structure, which allows microscopic pieces of sediment and contaminants to be removed from the water.6 Additionally, nanosensors made using titanium oxide nanowires, which can be functionalized to change color when they come into contact with certain contaminants, can help detect what kind of sediment is being removed.6 Encapsulating nutrients at the nanoscale, especially in lipid- or polymer-based nanoparticles, increases their absorption and circulation within the body.7 Encapsulating vitamins and minerals within nanoparticles slows their release from food, causing absorption to occur at the optimal stage of digestion.4 Coatings containing nano-sized nutrients are also being applied to foods to increase their nutritional value.7 There are thus many useful applications of nanoparticles for which consumers have already expressed support.

While testing and research are ongoing, nanotechnology is already making food safer and healthier for consumers. The FDA is currently studying the efficacy of nanotechnology in food under the 2013 Nanotechnology Regulatory Science Research Plan. Though the study has not yet been completed, the FDA has stated that in the interim, it “supports innovation and the safe use of nanotechnology in FDA-regulated products under appropriate and balanced regulatory oversight.”8,9 As nanotechnology becomes commonplace, consumers can expect to see an increase in the application of nanotechnology in food and food packaging in the near future.


  1. Ortiz, C. Wait, There's Nanotech in My Food? (accessed November 9, 2015).
  2. Biello, D. Do Nanoparticles in Food Pose a Health Risk? (accessed October 1, 2015).
  3. Yue, C., Zhao, S. and Kuzma, J. Journal of Agricultural Economics. 2014. 66: 308–328. doi: 10.1111/1477-9552.12090
  4. Sozer, N. and Kokini, J. Trends Biotechnol. 2009. 27(2), 82-89.
  5. Duncan, T. J. Colloid Interface Sci. 2011. 363(1), 1-24.
  6. Inderscience Publishers. (2010, July 28). Nanotechnology for water purification. ScienceDaily. (accessed March 3, 2016)
  7. Srinivas, P. R., Philbert, M., Vu, T. Q., Huang, Q., Kokini, J. L., Saos, E., … Ross, S. A. (2010). Nanotechnology Research: Applications in Nutritional Sciences. The Journal of Nutrition, 140(1), 119–124.
  8. U.S. Food and Drug Administration. (accessed November 9, 2015).
  9. U.S. Food and Drug Administration. (2015). (accessed November 9, 2015).


Nano-Materials with Giga Impact


What material is so diverse that it has applications in everything from improving human lives to protecting the earth? Few materials are capable of both treating prevalent diseases like diabetes and creating batteries that last orders of magnitude longer than industry standards. None are as thin, lightweight, and inexpensive as carbon nanotubes.

Carbon nanotubes are molecular cylinders made entirely of carbon atoms, which form a hollow tube just a few nanometers thick, as illustrated in Figure 1. For perspective, a nanometer is roughly one hundred-thousandth the width of a human hair.1 The first multi-walled nanotubes (MWNTs) were reported by L. V. Radushkevich and V. M. Lukyanovich of Russia in 1952.2 Morinobu Endo first observed single-walled nanotubes (SWNTs) in 1976, although the discovery is commonly attributed to Sumio Iijima at NEC of Japan in 1991.3,4

Since their discovery, nanotubes have been the subject of extensive research by universities and national labs for the variety of applications in which they can be used. Carbon nanotubes have proven to be an amazing material, with properties that surpass those of existing alternatives such as platinum, stainless steel, and lithium-ion cathodes. Because of their unique structure, carbon nanotubes are revolutionizing the fields of energy, healthcare, and the environment.


One of the foremost applications of carbon nanotubes is in energy. Researchers at Los Alamos National Laboratory have demonstrated that carbon nanotubes doped with nitrogen can be used to create a chemical catalyst. Doping involves substituting one type of atom for another; in this case, carbon atoms were replaced with nitrogen. The synthesized catalyst can be used in lithium-air batteries, which can hold a charge 10 times greater than that of a lithium-ion battery. A key parameter in the battery’s operation is Oxygen Reduction Reaction (ORR) activity, a measure of a chemical species’ ability to gain electrons. The ORR activity of the nitrogen-doped material complex is not only the highest of any non-precious-metal catalyst in alkaline media, but also higher than that of precious metals such as platinum.5

In another major development, Dr. James Tour of Rice University has created a graphene-carbon nanotube complex upon which a “forest” of vertical nanotubes can be grown. The graphene base is a single, flat sheet of carbon atoms, essentially a carbon nanotube “unrolled.” The height-to-base ratio of this complex is equivalent to that of a house on a standard-sized plot of land extending into space.6 The graphene and nanotubes are joined at their interface by heptagonal carbon rings, giving the structure an enormous surface area of 2,000 m² per gram and allowing it to serve as a high-capacity storage mechanism in fast supercapacitors.7


Carbon nanotubes also show immense promise in the field of healthcare. Take Michael Strano of MIT, who has developed a sensor composed of nanotubes embedded in an injectable gel that can detect several molecules. Notably, it can detect nitric oxide, an indicator of inflammation, as well as blood glucose levels, which diabetics must continuously monitor. The sensors take advantage of carbon nanotubes’ natural fluorescence; when the tubes are complexed with a molecule that then binds to a specific target, their fluorescence increases or decreases.8

Perhaps the most important potential application for carbon nanotubes in healthcare lies in fighting cancer. In human cells, a gene called HER2 helps regulate cell growth and proliferation. Normal cells have two copies of this gene, but 20-25% of breast cancer cells have three or more copies, resulting in quickly growing tumor cells. Approximately 40,000 U.S. women are diagnosed every year with this type of breast cancer. Fortunately, Huixin He of Rutgers University and Yan Xiao of the National Institute of Standards and Technology have found that they can attach an anti-HER2 antibody to carbon nanotubes to kill these cells, as shown in Figure 2. Once the complex is inserted into the body, near-infrared light at a wavelength of 785 nm reflects off the antibody-nanotube complex, indicating where tumor cells are present. The wavelength is then increased to 808 nm, at which point the nanotubes absorb the light and vibrate, releasing enough heat to kill any attached HER2 tumor cells. This process has shown a nearly 100% success rate and leaves normal cells unharmed.9


Carbon nanotube technology also has environmental applications. Hui Ying Yang of Singapore has developed a water-purification membrane made of plasma-treated carbon nanotubes that can be integrated into portable, rechargeable, and inexpensive purification devices the size of a teapot. These new purification devices are ideal for developing countries and remote locations, where large industrial purification plants would be too energy- and labor-intensive. Unlike other portable devices, this rechargeable device uses a membrane system that does not require a continuous power source, does not rely on thermal processes or reverse osmosis, and can filter out organic contaminants found in brine water, the most common water supply in these developing and rural areas.10

Oil spills may no longer be such devastating natural disasters either. Bobby Sumpter of the Oak Ridge National Laboratory demonstrated that doping carbon nanotubes with boron atoms alters the curvature of the tubes. Forty-five degree angles form, leading to a sponge-like structure of nanotubes. As these tubes are made of carbon, they attract hydrocarbons and repel water due to their hydrophobic properties, allowing the tubes to absorb up to 100 times their weight in oil. Additionally, these tubes can be reused, as burning or squeezing them was shown to cause no damage. Sumpter and his team used an iron catalyst in the growth process of the carbon nanotubes, enabling a magnet to easily control or remove the tubes from an oil cleanup scenario.11

Carbon nanotubes provide an incredible opportunity to impact areas of great importance to human life - energy, healthcare, and environmental protection. The results of carbon nanotube research in these areas demonstrate the remarkable properties of this versatile and effective material. Further studies may soon lead to their everyday appearance in our lives, whether in purifying water, fighting cancer, or even making the earth a better, cleaner place for everyone. Big impacts can certainly come in small packages.


  1. Nanocyl. Carbon Nanotubes. (accessed Sep 12, 2015).
  2. Monthioux, M.; Kuznetsov, V. Guest Editorial: Who should be given the credit for the discovery of carbon nanotubes? Carbon 44. [Online] 2006. 1621. (accessed Nov 15, 2015)
  3. Ecklund, P.; et al. Ugliengo, P. In International Assessment of Carbon Nanotube Manufacturing and Applications, Proceedings of the World Technology Evaluation Center, Inc. Baltimore, MD, June, 2007.
  4. Nanogloss. The History of Carbon Nanotubes – Who Invented the Nanotube? (accessed Sep 14, 2015).
  5. Understanding Nano. Economical non-precious-metal catalyst capitalizes on carbon nanotubes. (accessed Sep 17, 2015).
  6. Understanding Nano. James’ bond: A graphene/nanotube hybrid. (accessed Sep 19, 2015).
  7. Yan, Z. et al. ACS Nano. Toward the Synthesis of Wafer-Scale Single-Crystal Graphene on Copper Foils 2012, 6 (10), 9110–9117.
  8. Understanding Nano. New implantable sensor paves way to long-term monitoring. (accessed Sep 20, 2015).
  9. Understanding Nano. Combining Nanotubes and Antibodies for Breast Cancer 'Search and Destroy' Missions. (accessed Sep 22, 2015).
  10. Understanding Nano. Plasma-treated nano filters help purify world water supply. (accessed Sep 24, 2015).
  11. Sumpter, B. et al. Covalently bonded three-dimensional carbon nanotube solids via boron induced nanojunctions. Nature [Online] 2012, doi: 10.1038/srep00363. (accessed Mar 06, 2016).
  12. Huixin, H. et al. Anti-HER2 IgY antibody-functionalized single-walled carbon nanotubes for detection and selective destruction of breast cancer cells. BMC Cancer 2009, 9, 351.


Mars Fever


The Greeks called it the star of Ares. For the Egyptians, it was the Horus of the Horizon. Across many Asian cultures, it was called the Fire Star. Mars has been surrounded by mystery from the time of ancient civilizations to the recent discovery of water on the planet’s surface.1 But why have humans around the world and throughout history been so obsessed with the tiny red planet?

Modern fascination with Mars began in the late 19th century, when Italian astronomer Giovanni Schiaparelli first observed canali, or channels, on the planet’s surface. Yet canali was mistakenly translated into English as “canals,” a word implying artificial construction.2 This led many to believe that some sort of intelligent life existed on Mars and had engineered these canals for its survival. While the lines were later found to be optical illusions, the canali revolutionized the way people viewed Mars. For perhaps the first time in history, it seemed that humans might not be alone in the universe. Schiaparelli had unintentionally sparked what became known as “Mars fever,” indirectly fueling our desire to study, travel to, and even colonize Mars for more than 100 years. In Cosmos, Carl Sagan memorably described this odd fascination: “Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears.”

He was right.

After the canali misunderstanding, people began to believe that not only was there life on Mars, but intelligent life. The Mars craze escalated with the rise of science fiction, especially the publication of H.G. Wells’s classic Martian takeover novel The War of the Worlds.2,3 In 1938, the novel was adapted into a radio broadcast narrated by actor Orson Welles. The broadcast reportedly incited mass panic among listeners who mistook the fictional drama for news of an impending alien Armageddon, though later scholarship suggests the extent of the terror was greatly exaggerated.3 This is just one of many instances in which fiction or random events have been mistaken for extraterrestrial contact. The public image of Mars quickly evolved into a mystical red landscape inhabited by intelligent, antagonistic, green creatures. Mars fever was becoming contagious.

As decades passed, it became increasingly clear that Mars contained no tiny green men and that there were no flying saucers coming to colonize the Earth. The Mariner missions found no evidence of life on Mars, and as a result, Mars fever took on a new form: without the threat of intelligent, alien life forms, who was to stop us from colonizing the Red Planet? After all, perhaps the destruction of Earth wouldn’t be caused by invaders, but by earthlings themselves. Many contemporary science fiction writers focus on this idea of a second Earth in their stories. Award-winning novelist Michael Swanwick says, "We all are running out of a lot of different minerals, some of which our civilization depends on … There is a science-fiction idea for you."4 With natural resources dwindling and pollution on the rise, Earth might need a replacement.5 Mars’ relative similarity and proximity to Earth make it a strong candidate.

Rocket scientist Wernher von Braun even wrote The Mars Project, a book outlining a Martian colonization fleet that would be assembled in Earth orbit.6 It was a proposition of massive proportions, calling for $500 million in rocket fuel alone and for human explorers rather than rovers such as NASA’s Opportunity and Curiosity.6 Colonization efforts are not simply fictional, however. Elon Musk, CEO of Tesla and founder of the privately funded spaceflight company SpaceX, has put intense effort into interplanetary travel, particularly to Mars, though his methods remain abstract.7 Mars One has a similar goal of establishing a Martian colony. While not an aerospace company, Mars One is a logistical center for carrying out such a mission; it focuses primarily on funding and organization, leaving systems construction to more established aerospace companies.8 While both SpaceX and Mars One are dedicated to the cause of Martian colonization, it is evident that neither will be able to accomplish such a mission any time soon.

The possible mechanisms for colonizing Mars are endless, ranging from pioneering the landscape with 3D-printable habitats to harvesting remnants of water from the Martian soil. But the challenges arguably outstrip current technologies. To survive, humans would need space suits that could protect against extreme temperature differentials.5 Once on the surface, astronauts would need to establish food sources that were both sustainable and suitable for long-term missions.9 Scientists would also need to safeguard the mental health of astronauts spending more time in space than any human in history. Beyond these basic necessities, factors like harmful cosmic rays and the sheer cost of such a mission must be considered.10

The highly improbable nature of Mars exploration and colonization only seems to add fuel to the fire of humanity’s obsession. In spite of the challenges, Mars fever persists. Though Mars lies, on average, 225 million kilometers from Earth, it has piqued human curiosity across civilizations. Schiaparelli and his contemporaries could only dream of the possibilities that dwelled in Mars’s “canali.” Exploration of Mars, however, is no longer the stuff of science fiction. This is a new era of making the impossible possible, from Neil Armstrong’s “giant leap for mankind” to the establishment of the International Space Station. We are closer to Mars than ever before, and in the coming years we might just unveil the mystery behind the Red Planet.


  1. Mars Shows Signs of Having Flowing Water, Possible Niches for Life, NASA Says. (accessed September 28, 2015)
  2. A Short History of Martian Canals and Mars Fever. (accessed September 28, 2015)
  3. The Myth of the War of the Worlds Panic. (accessed October 10, 2015)
  4. Why Colonize Mars? Sci-Fi Authors Weigh In. (accessed Jan 30, 2016)
  5. Here’s why humans are so obsessed with colonizing Mars. (accessed Oct 10, 2015)
  6. Humans to Mars. (accessed Oct 10, 2015)
  7. SpaceX's Elon Musk to Reveal Mars Colonization Ideas This Year. (accessed October 10, 2015)
  8. About Mars One. (accessed Jan 30, 2016)
  9. Talking to the Martians. (accessed September 28, 2015)
  10. Will We Ever Colonize Mars? (accessed Jan 30, 2016)


Delving into a New Kind of Science


Since ancient times, humans have attempted to create models to explain the world. These explanations were stories, mythologies, religions, philosophies, metaphysics, and various scientific theories. Then, about three centuries ago, scientists revolutionized our understanding with a simple but powerful idea: applying mathematical models to make sense of our world. Ever since, mathematical models have come to dominate our approach to knowledge, and scientists have utilized complex equations as viable explanations of reality.

Stephen Wolfram’s A New Kind of Science (NKS) suggests a new way of modeling worldly phenomena. Wolfram postulates that elaborate mathematical models aren’t the only representations of the mechanisms governing the universe; simple patterns may be behind some of the most complex phenomena. To illustrate this, he began with cellular automata.

A cellular automaton is a set of colored blocks in a grid that is created stage by stage, with the color of each block determined by a set of simple rules applied to the colors of blocks in the preceding stage.1 On this basis, cellular automata seem fairly simple, but Wolfram illustrated their complexity with Rule 30. This cellular automaton, although it follows the simple rule illustrated in Figure 1, produces a pattern too irregular and complex for even the most sophisticated mathematical and statistical analysis. By applying NKS fundamentals, however, simple rules and permutations of the building blocks pictured can be developed to produce these extremely complex structures or models.2
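The behavior Wolfram describes is easy to reproduce for yourself. Below is a minimal Python sketch of Rule 30 (the function names are our own, and cells beyond the grid's edges are assumed to stay white); each new cell is computed as left XOR (center OR right), which is the Boolean form of Rule 30:

```python
def rule30_step(row):
    """Compute the next generation of a Rule 30 cellular automaton.

    Rule 30 in Boolean form: new cell = left ^ (center | right).
    Cells beyond the edges of the row are treated as 0 (white).
    """
    padded = [0] + row + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def rule30(generations):
    """Run Rule 30 from a single black cell; return each row as a string."""
    row = [0] * generations + [1] + [0] * generations
    rows = []
    for _ in range(generations):
        rows.append("".join("#" if cell else "." for cell in row))
        row = rule30_step(row)
    return rows

for line in rule30(16):
    print(line)
```

Running this prints the familiar chaotic triangle: the first few rows look orderly, but the center column quickly becomes so irregular that Wolfram himself proposed it as a source of pseudorandomness.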

By studying several cellular automata systems, Wolfram presents two important ideas: complexity can result from simple rules and complex rules do not always produce complex patterns.2

The first point is illustrated by the computer: relying on Boolean logic, the manipulation of combinations of “truths” (1’s) and “falses” (0’s), computers can perform complex computations. With the proper extensions, they can display images, play music, or even simulate entire worlds in video games. The resulting intuition, that complexity must arise from complexity, is not necessarily true. Wolfram shows again and again that simple rules can produce immense randomness and complexity.

Other natural phenomena support this theory. The patterns on mollusk shells mirror those generated by cellular automata, suggesting that the shells follow similarly simple rules during pattern creation.2 Perhaps other biological complexities are also the result of simple rules. Efforts are being made to understand the fundamental theory of physics based on ideas presented in NKS, and Wolfram’s idea might even apply to philosophy: if simple rules can create seemingly irregular complexity, then the simple neuronal impulses in the brain might also give rise to irregular complexity, which we perceive as free will.2

The most brilliant aspect of NKS lies in its underlying premise: a model of reality is not reality itself but only a model, so there can be several different, equally accurate representations. Our current approach, using mathematical models to explain the world, does not have to be the only one. Math can explain the world, but NKS shows that simple rules can do so too. There may be methods and theories, overlooked or still undiscovered, that can model our world in better ways.


  1. Weisstein, E. W. Cellular Automaton. Wolfram MathWorld, (accessed Mar 26, 2016).
  2. Wolfram, S. A New Kind of Science; Wolfram Media: Champaign, IL, 2002


3D Organ Printing: A Way to Liver a Little Longer


On average, 22 people in America die each day because a vital organ is unavailable to them.1 However, recent advances in 3D printing have made manufacturing organs feasible for combating the growing problem of organ donor shortages.

3D printing utilizes additive manufacturing, a process in which successive layers of material are laid down to make objects of various shapes and geometries.2 It was first described in 1986, when Charles W. Hull introduced his method of ‘stereolithography,’ in which thin layers of material were built up and cured with ultraviolet lasers. In the past few decades, 3D printing has driven innovation in many areas, including engineering and art, by allowing rapid prototyping of various structures.2 Over time, scientists have further developed 3D printing to employ biological materials as a modeling medium. Early iterations of this process used a spotting system to deposit cells into organized 3D matrices, allowing the engineering of human tissues and organs. This method, known as 3D bioprinting, requires layer-by-layer precision and the exact placement of 3D components. The ultimate goal of 3D bioprinting is to assemble human tissues and organs that have the correct biological and mechanical properties for proper functioning, to be used for clinical transplantation. To achieve this goal, modern 3D organ printing draws on three approaches: biomimicry, autonomous self-assembly, and mini-tissues. Typically, a combination of all three techniques is utilized to achieve bioprinting with multiple structural and functional properties.

The first approach, biomimicry, involves the manufacture of identical components of cells and tissues. The goal of this process is to use the cells and tissues of the organ recipient to duplicate the structure of organs and the environment in which they reside. Ongoing research in engineering, biophysics, cell biology, imaging, biomaterials, and medicine is very important for this approach to prosper, as a thorough understanding of the microenvironment of functional and supporting cell types is needed to assemble organs that can survive.3

3D bioprinting can also be accomplished through autonomous self-assembly, a technique that uses the same mechanisms as embryonic organ development. Developing tissues contain cellular components that produce their own extracellular matrix and organize the structure of the tissue. Through this approach, researchers hope to use the cells themselves to create fully functional organs; cells are the driving force of the process, as they ultimately determine the functional and structural properties of the tissue.3

The final approach used in 3D bioprinting involves mini-tissues and combines the processes of both biomimicry and self-assembly. Mini-tissues are the smallest structural units of organs and tissues. They are replicated and assembled into macro-tissue through self-assembly. Using these smaller, potentially undamaged portions of the organs, fully functional organs can be made. This approach is similar to autonomous self-assembly in that the organs are created by the cells and tissues themselves.

As modern technology makes it possible, techniques for organ printing continue to advance. Although successful clinical implementation of printed organs is currently limited to flat organs such as skin and blood vessels and hollow organs such as the bladder,3 current research is ongoing for more complex organs such as the heart, pancreas, or kidneys.

Despite recent advances in bioprinting, issues remain. Since cell growth occurs in an artificial environment, it is hard to supply the oxygen and nutrients needed to keep larger organs alive. Additionally, moral and ethical debates surround the science of cloning and printing organs.3 Some camps assert that organ printing manipulates and interferes with nature; others feel that, done ethically, 3D bioprinting of organs will benefit mankind and improve the lives of millions. There is also concern about who will control the production and quality of bioprinted organs. Some regulation of organ production will be necessary, and it may be difficult to decide how to distribute this power. Finally, the potential expense of 3D-printed organs may limit access for lower socioeconomic classes: such organs, at least in their early years, will more likely than not be expensive to produce and to buy.

Nevertheless, there is widespread excitement surrounding the current uses of 3D bioprinting. While clinical trials may be in the distant future, organ printing can currently act as an in vitro model for drug toxicity, drug discovery, and human disease modeling.4 Additionally, organ printing has applications in surgery, as doctors may plan surgical procedures with a replica of a patient’s organ made with information from MRI and CT images. Future implementation of 3D printed organs can help train medical students and explain complicated procedures to patients. Additionally, 3D printed tissue of organs can be utilized to repair livers and other damaged organs. Bioprinting is still young, but its widespread application is quickly becoming a possibility. With further research, 3D printing has the potential to save the lives of millions in need of organ transplants.


  1. U.S. Department of Health and Human Services. Health Resources and Services Information. (accessed Sept. 15, 2015)
  2. Hull, C.W. et al. Method of and apparatus for forming a solid three-dimensional article from a liquid medium. WO 1991012120 A1 (Google Patents, 1991).
  3. Atala, A. and Murphy, S. 3D Bioprinting of Tissues and Organs. Nat. Biotechnol. [Online] 2013, 32, 773-785. (accessed Sept. 15, 2015)
  4. Drake, C. Kasyanov, V., et al. Organ printing: Promises and Challenges. Regen. Med. 2008, 3, 1-11.