METALS IN MEDICINE AND THE ENVIRONMENT-Aluminum and Alzheimer’s Disease

Alzheimer’s disease is characterized by a buildup of extracellular neural plaques and intracellular neurofibrillary tangles in the neocortex and hippocampus (regions responsible for short-term and spatial memory), resulting in decreased cognitive capacity and dementia.  Neural plaque is composed of beta-amyloid (Aβ) protein, the proteolytically cleaved product of the amyloid-β precursor protein (AβPP).  In its unaggregated α-helical state, as found in healthy cerebrospinal fluid, Aβ is benign.  However, biological conditions may promote the aggregation of Aβ into fibrils, which adopt a β-pleated sheet conformation.(1)

Figure 1.  Immunohistochemical staining of AβPP.

Recently, there has been a significant amount of investigation into the role metal cations play in the development of neurodegenerative disorders, especially Alzheimer’s disease.

However, research employing different investigative techniques and experimental approaches has often yielded conflicting results.

Regardless, it is widely accepted that aluminum is a formidable neurotoxic agent, whose effects include, but are certainly not limited to, impaired signal transduction and free-radical damage. (1) Aluminum is present in many everyday commodities, including deodorant, cookware, and food packaging.  Thus, it makes sense to investigate its possible role in the onset and development of Alzheimer’s disease.
Various neuropsychological and statistical studies have found a strong correlation between excessive aluminum exposure and the impaired cognitive capacity indicative of neurodegenerative disorders. (2, 3) Molecular studies using AβPP transgenic mice reveal that an aluminum-enriched diet results in an accumulation of aluminum in the hippocampus, one of the first regions of the brain to suffer damage (4).  Furthermore, in a metal-rich diet, aluminum can act synergistically with copper to increase AβPP levels. (1)

Figure 2. The hippocampus.

Aluminum has also been suggested to promote the hyperphosphorylation of tau proteins. (6) Tau proteins stabilize microtubules, and their hyperphosphorylation promotes the aggregation found in neurofibrillary tangles.  Control of regulatory factors and cascades is of interest as well; the transcription or translation of kinases involved in tau regulation may be affected by aluminum.
Other researchers have proposed that aluminum can actually potentiate the defibrillogenesis of beta-amyloid protein.  Chauhan et al. (5) report that although cations traditionally may act as nucleation sites for polymerization, as is the case for actin, cations such as aluminum in fact act as severing factors in serum, breaking down Aβ fibrils.

If this is the case, aluminum may actually be a useful treatment for Alzheimer’s disease.
Conflicting hypotheses and experimental evidence complicate this research.

Clearly, when some studies suggest aluminum plays a pathological role in Alzheimer’s disease while others propose it as a possible treatment of the disease, there is much research still to be done.

References

(1) Gómez, M., Esparza, J., Cabré, M., García, T., Domingo, J.  Aluminum exposure through the diet: Metal level in AbPP transgenic mice, a model for Alzheimer’s disease.  Toxicology 249, 214-219 (2008).

(2) Polizzi, S., Pira, E., Ferrara, M., Bugiani, M., Papaleo, A., Albera, R., Palmi, S.  Neurotoxic Effects of Aluminum Among Foundry Workers and Alzheimer’s Disease.  NeuroToxicology 23, 761-774 (2002).

(3) Pechansky, F., Kessler, F.H.P., von Diemen, L., Bumaguin, D.B., Surratt, H.L., Inciardi, J.A.  Brazilian crack users show elevated serum aluminum levels.  Rev Bras Psiquiatr 29(1), 39-42 (2007).

(4) http://en.wikipedia.org/wiki/Hippocampus

(5) Chauhan, V.P.S., Ray, I., Chauhan, A., Wegiel, J., Wisniewski, H.M.  Metal Cations Defibrillize the Amyloid Beta-Protein Fibrils.  Neurochemical Research 22, 805-809 (1997).

(6) Domingo, J., Aluminum and other metals in Alzheimer’s disease: a review of potential therapy with chelating agents.  Journal of Alzheimer’s Disease 10, 331-341 (2006).


Arsenic is well known as a toxic element.  The World Health Organization (WHO) has set a recommended limit of 10 µg/L (reduced from 50 µg/L in 1993) for the concentration of arsenic in drinking water (1, 2).  Increased knowledge of arsenic toxicity, particularly its carcinogenicity, led to the change.  Not until 2001 did the United States follow WHO by also reducing the acceptable arsenic concentration to 10 µg/L.  In many countries the acceptable arsenic level remains 50 µg/L, in part because of inadequate testing facilities and equipment.  In some areas of the world, arsenic is present in groundwater at concentrations higher than the guideline value (1).  Both naturally occurring and anthropogenic arsenic in the environment can lead to groundwater contamination.  Figure 1 shows areas where arsenic is present in aquifers.

Figure 1.  Map illustrating arsenic-affected aquifers (1).

The two common forms of arsenic are As(III), arsenite, and As(V), arsenate.  Arsenite is more mobile and more toxic than arsenate.  Arsenic can be released from sediments under anaerobic, reducing conditions in the form of arsenite, and under oxidizing, high-pH conditions in the form of arsenate (HAsO42-) (1, 3, 4).

There are a variety of symptoms resulting from chronic arsenic exposure.  Skin lesions and hyperkeratosis are signs of arsenic poisoning (Figure 2); these telltale signs can take as long as 10 years to appear (2, 5, 6).  Skin cancer is the cancer most commonly connected with arsenic poisoning, and cancers of the bladder, kidney, and lung have also been linked to excessive arsenic exposure.  Low-dose exposure to arsenic may lead to neurological disease, cardiovascular disease, or diabetes (6).


Figure 2.  A woman shows signs of arsenic poisoning.

Arsenic in Bangladesh

Dr. Alan Smith, director of the arsenic research program at the University of California, Berkeley, states, “The contamination of groundwater by arsenic in Bangladesh is the largest poisoning of a population in history…” (6).
Beginning in the 1970s, the United Nations Children’s Fund (UNICEF) partnered with the Department of Public Health Engineering to install tube wells to provide a safe source of drinking water.  These wells were to replace water supplies that were contaminated with bacteria and responsible for large outbreaks of disease.  At the time the wells were installed, arsenic was not a known concern and was therefore not tested for.  Today, there are between 8 and 12 million tube wells, and 90% of the population receives their drinking water from a well (2, 6).

In the mid-1980s there was increasing evidence that arsenic was hazardous.  However, the fact was not acknowledged until 1993, when WHO decreased the acceptable concentration (6).  The issue in Bangladesh was not addressed until 1998, when the World Bank attempted to initiate a program to screen the country’s tube wells (5).

Arsenic concentrations in the range of <0.5 µg/L to 3,200 µg/L were found in groundwater across Bangladesh (2, 7).  Figure 3 shows a map of Bangladesh and the concentration of arsenic in groundwater.  The estimated number of people exposed to arsenic concentrations greater than 50 µg/L is 28-35 million, and the number exposed to more than 10 µg/L is 46-57 million (2).  Chakraborti et al. reported that of 27,000 wells sampled in Bangladesh, 59% exceeded 50 µg/L and 73% exceeded 10 µg/L (8).  According to some approximations, arsenic in drinking water will cause 200,000 to 270,000 deaths from cancer in Bangladesh alone (5).


Figure 3. Map showing concentrations of arsenic in groundwater (1).

The problem in Bangladesh is compounded by the country’s economic status: many people live in poverty, are unaware of the contamination, and cannot afford healthcare.

Linking Arsenic in the U.S. to Chickens

It is reported that the poultry industry in the United States supplements chicken feed with roxarsone, 3-nitro-4-hydroxybenzene arsonic acid.  The organic arsenic compound promotes growth and controls intestinal parasites.  Nearly all of the arsenic consumed by the chickens is excreted in the manure.  Manure has been used as a nitrogen-containing fertilizer for centuries and is spread onto farmland (9).  A large percentage of the nation’s poultry comes from the Delmarva Peninsula, with Sussex County, Delaware being the largest broiler-chicken-producing county in the United States (10).  Chicken manure therefore introduces large quantities of arsenic to agricultural land: poultry litter is spread on land at a rate of 9 to 20 metric tons per hectare, and each year an estimated 20 to 50 metric tons of roxarsone in chicken litter is applied to fields on the Delmarva Peninsula (11).


Figure 4.   Roxarsone
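
To put these figures in perspective, the arsenic carried by that roxarsone can be estimated from the compound’s molecular formula, C6H6AsNO6, of which arsenic is roughly 28% by mass.  The short Python sketch below is a back-of-envelope calculation based only on the 20 to 50 metric ton figure quoted above; it is an illustration, not a number taken from the cited studies.

```python
# Back-of-envelope estimate (not from the cited sources) of the arsenic mass
# carried onto Delmarva fields by roxarsone in poultry litter.  The atomic
# masses are standard values; the annual roxarsone tonnage comes from the text.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "As": 74.922, "N": 14.007, "O": 15.999}

def molar_mass(formula: dict) -> float:
    """Molar mass in g/mol from an {element: count} dictionary."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

roxarsone = {"C": 6, "H": 6, "As": 1, "N": 1, "O": 6}     # C6H6AsNO6
as_fraction = ATOMIC_MASS["As"] / molar_mass(roxarsone)   # about 0.285

for tons_roxarsone in (20, 50):                           # metric tons per year
    tons_arsenic = tons_roxarsone * as_fraction
    print(f"{tons_roxarsone} t roxarsone -> ~{tons_arsenic:.1f} t arsenic")
```

At those application rates, roughly 6 to 14 metric tons of arsenic per year would reach Delmarva fields.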

Roxarsone itself is not toxic, but the compound is converted to inorganic arsenate fairly quickly.  The organic form is soluble in water and can easily be leached into groundwater or surface water.  Bacteria of the genus Clostridium are responsible for the conversion to As(V).  Under anaerobic, reducing conditions, the As(V) is further converted to As(III), a more mobile and toxic form.  The environmental processes involved in these transformations have not been fully characterized (12).  Figure 5 illustrates the path of arsenic from chicken feed to water and beyond (13).

Manure is spread onto fields, and rainwater causes run-off into rivers, streams, and lakes.  Studies report that elevated concentrations of arsenic were found in the Pocomoke River near areas spread with poultry manure.  Another study, by the Maryland Geological Survey, found arsenic levels violating federal health standards in 11 percent of 250 drinking wells sampled on the Eastern Shore and elsewhere in the state (14).  There is not a large amount of evidence linking these higher arsenic levels to chicken litter, but the connection cannot be ruled out.  Arsenic may also have an impact on aquatic life: it suppresses the immune system in fish and may be contributing to the fish kills in the Shenandoah River (15).  An additional route of arsenic pollution is through poultry-litter pellets sold as garden fertilizer, which can lead to exposure via dust (9).


Figure 5. Arsenic from chicken feed to water (13).

 

Arsenic Remediation

Because arsenite is highly soluble in water across a wide pH range, an oxidation step is required to reduce both its solubility and its mobility.  Manganese(IV) oxides are among the main redox catalysts in the environment and also extensively adsorb a number of anions and cations.  Their highly reactive surfaces allow manganese oxides, even at low concentrations, to oxidize arsenic from arsenite to arsenate, a much less reactive and less mobile form (Figure 6).  However, several variables can influence the oxidation reaction: particles found in nature range from nanometer to micrometer size, which alters the exposed surface area.  Iron oxides can also oxidize arsenite, but much more slowly.  The released Mn2+ ions adsorb onto the manganese dioxide, giving it a positive surface charge and enhancing the removal of the arsenate produced by arsenite oxidation (3).

H3AsO3 + MnO2 → HAsO42- + Mn2+ + H2O

Figure 6. The oxidation of arsenite to arsenate by manganese dioxide (3).
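
One way to check the stoichiometry of the reaction in Figure 6 is to split it into half-reactions.  The two-electron formulation below is a standard textbook treatment written for this article, not an equation quoted from reference 3.

```latex
% Half-reactions underlying the overall equation in Figure 6
% (a standard two-electron formulation; not quoted from reference 3).
\begin{align*}
\text{oxidation:} \quad & \mathrm{H_3AsO_3 + H_2O \longrightarrow HAsO_4^{2-} + 4\,H^+ + 2\,e^-}\\
\text{reduction:} \quad & \mathrm{MnO_2 + 4\,H^+ + 2\,e^- \longrightarrow Mn^{2+} + 2\,H_2O}\\
\text{overall:}   \quad & \mathrm{H_3AsO_3 + MnO_2 \longrightarrow HAsO_4^{2-} + Mn^{2+} + H_2O}
\end{align*}
```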

This treatment can be applied to water filtration.  Bajpai and Chaudhuri constructed a home arsenic-removal unit that was affordable and effective: they prepared manganese dioxide-coated sand and used it to efficiently remove arsenic from a water sample.  The unit cost approximately five U.S. dollars (4).

Resources

World Health Organization – Arsenic in drinking water

Wikipedia – Arsenic contamination of groundwater

Image Sources

Poisoned woman

Roxarsone

References

(1) Smedley, P. L. and Kinniburgh, D. G.  A review of the source, behaviour and distribution of arsenic in natural waters.  Applied Geochemistry. 2002, 17:517-568.

(2) World Health Organization.  Arsenic in drinking water. May 2001. Accessed October 26, 2008.

(3) Driehaus, W., Seith, R., and Jekel, M. Oxidation of arsenate(III) with manganese dioxides in water treatment.  Wat. Res. 1995, 29(1):297-305.

(4) Bajpai, S. and Chaudhuri, M.  Removal of arsenic from ground water by manganese dioxide-coated sand.  J. Envir. Eng.  1999, 125(8):782-784.

(5) Pearce, F.  Bangladesh’s arsenic poisoning: who is to blame?  The UNESCO Courier, January 2001. Accessed October 26, 2008.

(6) Smith, A. H., Lingas, E. O., and Rahman, M.  Contamination of drinking-water by arsenic in Bangladesh: a public health emergency.  Bulletin of the World Health Organization 2000, 78(9):1093-1103.

(7) Vu, K. B., Kaminski, M. D., and Nunez, L.  Review of arsenic removal technologies for contaminated groundwater.  2003. Accessed October 26, 2008.

(8) Chakraborti, D. et al.  Characterization of arsenic based sediments in the Gangetic Delta of West Bengal, India.  Arsenic Exposure and Health Effects IV, 2001, 27-52.

(9)  Christen, K.  Chicken poop and arsenic.  Environmental Science and Technology Online. March 29, 2006. Accessed October 26, 2008.

(10) Delmarva Poultry Industry.  Delaware broiler chicken production. May 2008. Accessed October 26, 2008.

(11) Hileman, B.  Arsenic in chicken production. Chemical and Engineering News. April 7, 2007.  85(15):34-35. Accessed October 26, 2008. 

(12)  Stolz, J. F., Perera, E., et al.  Biotransformation of 3-nitro-4-hydroxybenzene arsonic acid (roxarsone) and release of inorganic arsenic by Clostridium species.  Environ. Sci. Technol. 2007, 41:818-823.

(13) Denver, J., Ator, S. W., et al.  Water quality in the Delmarva Peninsula, Delaware, Maryland, and Virginia, 1999-2001. U. S. Geological Survey. 2004. Accessed October 26, 2008.

(14) Pelton, T. Arsenic’s use in chicken feed troubles health advocates.  March 10, 2007. Accessed October 26, 2008.

(15) Something fishy about the Shenandoah River. March 2008. Accessed October 26, 2008.

Grandma smelled geranium,
Started feeling kind of bum.
Sure you guessed the trouble, right—
Grandma whiffed some Lewisite.

From How to Tell the Gases, by Major Fairfax Downey
(United States Field Artillery Branch) [1]

Introduction

The thought of chemicals scares a lot of people in today’s society.  Whether the discussion is of household cleaners, metals found in soils and water, or chemical warfare, poisonous chemicals surround us in our everyday lives.  WWI saw the first mass use of toxic gases as weapons of battle, leading to the loss of many lives, as seen in Table 1.  These original toxic gases were synthesized from commercial products; for example, in 1917 mustard gas shells were introduced to the battlefield, leading to severe skin burns. [2] After WWI, however, the signing of the Geneva Protocol defined the permissible use of chemical weapons: the protocol stated that the only lawful use of chemical weapons by a country was in retaliation.  During WWII this agreement was upheld, yet it did not halt the production of these toxic gases. [2] Even though deaths amounted to only 7.7% of WWI gas casualties, this new concept of chemical warfare came as a shock to the world.

Table 1.  Deaths Associated with Gas in WWI [3, p.36]

| Country | Gas Casualties | Deaths | % Deaths |
|---|---|---|---|
| Russia (unreliable) | 475,340 | 56,000 | 11.7 |
| France | 190,000 | 8,000 | 4.2 |
| Italy (unreliable) | 13,300 | 4,627 | 34.7 |
| U.S. | 70,752 | 1,421 | 2.0 |
| British Commonwealth | 180,983 | 6,062 | 3.3 |
| Germany | 78,663 | 2,280 | 2.9 |
| Totals | 1,009,038 | 78,390 | 7.7 |

Characteristics

Lewisite, the “dew of death,” “methyl,” and “L” are all names for an extremely poisonous arsenic-containing agent that could potentially be used in chemical warfare [1, 4].  The name “dew of death” arose because the liquid chemical was planned to be released from airplanes at night, sprinkling to the ground and leaving a “dew” on the ground in the morning [1].  Lewisite was discovered in 1903 by a Catholic priest, Father Julius Aloysius Nieuwland, but was not introduced to the world at that time; the molecule was rediscovered and characterized in 1918 for use as a chemical weapon [1].


Figure 1.  Reaction Scheme to Produce Lewisite [1]

Lewisite is an organoarsenic chloride compound, C2H2AsCl3; it is an oily, colorless liquid that is insoluble in water and smells like geraniums.  Lewisite’s potential effectiveness as a warfare agent stems from its high boiling point, 374°F, and its low freezing point, 0.4°F; nevertheless, it is difficult to work with because of its low solubility in water [4].

Figure 2. Lewisite Structure [A]

Figure 3. Identification Poster for Lewisite (1941-1945) [B]

Compared with mustard gas, Lewisite is significantly more toxic because of its arsenic.  Lewisite can enter the body through inhalation, skin or eye contact, or ingestion. [4] Inhalation of Lewisite causes burning in the lungs at an exposure (concentration multiplied by time) of approximately 8 mg-min/m3, yet it is not until about 20 mg-min/m3 that the nose detects the smell. [5] The median lethal exposure (LCt50) is 1,500 mg-min/m3.  Testing on dogs demonstrated the effects of Lewisite inhalation.  The first symptoms are watery eyes, a running nose, and vomiting; the condition progresses to extreme nasal congestion, violent sneezing, continuous drainage, and intense coughing, and usually to death.  If an animal was still alive on the 5th day, however, it usually recovered by the 10th day. [5]
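
The “mg-min/m3” values above are concentration-time (Ct) products, i.e., airborne concentration multiplied by exposure time.  The minimal Python sketch below converts them into exposure times for one airborne concentration; that concentration is purely illustrative and is not a value from the cited sources.

```python
# Ct (concentration x time) arithmetic for the exposure figures quoted above.
# The airborne concentration chosen here is a made-up illustrative value.

LCT50 = 1500.0        # mg*min/m^3, median lethal exposure quoted in the text
ODOR_CT = 20.0        # mg*min/m^3, exposure at which the geranium odor is noticed

concentration = 10.0  # mg/m^3, hypothetical airborne concentration

print(f"odor noticed after roughly {ODOR_CT / concentration:.0f} minutes")
print(f"LCt50 reached after roughly {LCT50 / concentration:.0f} minutes")
```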

If Lewisite comes in contact with the skin, blistering occurs within 26-48 hours.  At low concentrations, deep burns and skin cell death occur; at large doses (e.g., about 1/3 teaspoon for a 70-kilogram man), death can occur in 1 to 10 days. [5] The health problems associated with acute exposure to Lewisite are caused by interactions with, and inhibition of, certain enzymes: pyruvic oxidase, alcohol dehydrogenase, and hexokinase.  It is believed that Lewisite interacts with thiol (-SH) groups; however, “the exact mechanism by which the Lewisite damages cells is not known.” [4]

Figure 4. Blistering Caused by Poisonous Gases [C]

Sadly, there is no specific test available for Lewisite exposure; the best available indicator is the arsenic concentration in urine.  However, if exposure is recognized by the victim (e.g., in the laboratory or during battle), there is one antidote, British Anti-Lewisite, or Dimercaprol [4].  The key feature of British Anti-Lewisite, 2,3-dimercaptopropanol, is its dithiol group, which has a greater affinity for the arsenic than the otherwise affected enzymes, so the arsenic is bound and then excreted. [1, pp. 78-81] Yet the treatment is not completely benign: skin rashes, increased blood pressure, vomiting, eye tearing, and drainage can occur. [1]

Figure 5.  British Anti-Lewisite Compound: 2,3-dimercaptopropanol [D]

Lewisite in War

Lewisite’s main research and development center was located on the campus of American University and in the neighboring community of Spring Valley in Washington, D.C. [6] It was here that the government founded the American University Experimental Station in 1918.  The poisonous gas production at this site was the second largest in the world, and it is estimated that approximately 1,000-1,200 different potential chemical weapons were produced there. [6] Not only were deadly chemicals produced at this location, but tests were also conducted to determine the effectiveness of the gases.  When the plant was shut down in 1920, the remaining poisonous chemicals had to be disposed of, yet the question still debated today is whether this was done properly.  The regulations set by the army required that containers of poison be buried at least 3 to 3 ½ feet underground and not be disposed of in water. [6] The issue of proper disposal of these poisons has surfaced multiple times, but a true investigation was not carried out until 1992, when a man doing paving work broke a glass bottle and was rushed to the hospital with eye pain and skin burns.  After multiple similar events, the EPA reviewed soil samples and discovered arsenic soil concentrations of 1,200 ppm, roughly 30 times greater than the emergency removal limit of 43 ppm.  During the excavation, 600 items were discovered, “almost ½ of which were munitions and a small fraction of which still contained chemical agents. Some bombs were intact and still had fuses.” [6] This is only one example of an area where history is beginning to unravel due to contamination from chemical warfare agents; there are as many as 9,184 Formerly Used Defense Sites known today. [6]


Figure 6.  Women Working in Chemical Production Plants [E]

It is also important to note that the United States is not the only country facing this problem.  Chemical gases such as Lewisite were not widely used in WWII, with two exceptions: use by Japan on a Chinese battlefield and use by the Germans in concentration camps.  Major stockpiles were nevertheless created in case of retaliation. [1, p. xxi] Countries with stockpiles included Germany, Italy, England, France, and the Soviet Union.  The Soviet Union had the largest amount of Lewisite because of its lack of nuclear weapons and its plan to use the agent as its primary defense.  Due to these large stockpiles, Russia has developed many problems with arsenic poisoning throughout the country. [1, p. xxii]

Table 2: Amount of Lewisite Produced by Countries [1, p. 119]

| Country | Tons of Lewisite |
|---|---|
| U.S. | 20,150 |
| Soviet Union | 22,700-47,000 |
| Japan | 1,400 |
| England, Iraq, North Korea, Italy | 2,000 |

 

One needs to remember that actions taken with chemicals today can affect generations to come.  The production of Lewisite around the time of WWII did not only affect people in the past; it is affecting today’s generations as well, through continuing cases of arsenic poisoning.

 

References

(1) Vilensky, Joel A., Richard Butler, and Pandy R. Sinish. Dew of Death : The Story of Lewisite, America’s World War I Weapon of Mass Destruction. New York: Indiana UP, 2005. vi+.

(2) Peak, John, ed. “Weapons of Mass Destruction (WMP): Chemical Weapons.” GlobalSecurity.org. 27 Apr. 2004. 17 Oct. 2008.

(3) Hammond, James W. Poison Gas : The Myths Versus Reality. New York: Greenwood Group, Incorporated, 1999. 36.

(4) Medical Management Guidelines for Blister Agents: Lewisite (L) (C2H2AsCl3) Mustard-Lewisite Mixture (HL). United States. Department of Health and Human Services. Agency for Toxic Substances & Disease Registry. 24 Sept. 2007. 17 Oct. 2008.

(5) Vilensky, Joel A., and Pandy R. Sinish. “Blisters as Weapons of War: The Vesicants of World War I.” Summer 2006. Chemical Heritage Foundation. 17 Oct. 2008.

(6) McLamb, Marguerite E. “From Death Valley to Spring Valley.” Sustainable Development Law & Policy III (2003): 3-6.

Image Credits

(A) Lewisite
(B) Lewisite Identification Poster
(C) Mustard Blistering
(D) Dimercaprol
(E) Women Working in Plants

Author: Evan Joslin

Disclaimer: This report was done for a case study in a class about Metals in Medicine on the effects of arsenic throughout the world; it is not being used as an act of terrorism. (EJ)


 

Bismuth

Bismuth is used to treat a range of ailments.  Most commonly, bismuth is used to help protect against gastric ulcers, as well as hydration therapies for young children suffering from severe diarrhea.  Although bismuth has been shown to be exceedingly useful in these capacities it has also been blamed for encephalopathy in adults.

Medicinal Uses

Bismuth Subsalicylate

Reactive oxygen species (ROS) and caustic agents such as sodium hydroxide, hydrochloric acid, and ethanol have been shown to cause gastric lesions and injuries (1).  It is not known whether these injuries are directly caused by the harsh substances themselves or whether the substances retard the repair system of gastric mucosal cells (2).  The active ingredient in Pepto-BismolTM, bismuth subsalicylate, has been shown to scavenge reactive oxygen species and other caustic agents, thereby decreasing damage to gastric mucosal cells (2).  A study by Bagchi et al. suggested that the subsalicylate functions as a buffer against the pH fluctuations caused by hydrochloric acid and sodium hydroxide.  In another study, bismuth is hypothesized to act as a binding factor in mucus, making the mucus thicker and more effective at shielding the gastric lining from oxidative damage (3).

bismuth subsalicylate

Bismuth-mediated Hydration Therapies

Diarrhea kills five to ten million people annually worldwide (4).  Children are the hardest hit, accounting for 4.6 million deaths per year (4).  In the New England Journal of Medicine, Figueroa et al. reported that bismuth subsalicylate can be used together with rehydration therapy for infants suffering from diarrhea.  The group found a statistically significant retention of water and decrease in watery stools (5).  The mechanism of action is thought to be an antisecretory effect directed against bacterial toxins (6).

Ranitidine Bismuth Citrate

Helicobacter pylori has been identified as a leading cause of gastric ulcer disease, and ranitidine bismuth citrate is administered to combat the bacterium.  Ranitidine has been linked to the non-competitive inhibition of phospholipase A2 in H. pylori.  Inhibition of this enzyme, which damages the cells lining the stomach by cleaving phospholipids, prevents ulcer formation (7).  Bismuth further prevents ulcer formation by binding mucus together and by inhibiting pepsin activity (7).

ranitidine

Adverse Side Effects

Encephalopathy

In the 1970s and 1980s there was a high incidence of encephalopathy traced to high levels of ingested bismuth.  Those affected suffered from headaches and eventually experienced difficulty walking and standing (8).  These cases were attributed to exceptionally high oral doses of bismuth subsalicylate administered to the patients, and they led to strict regulation of bismuth-containing medicines in Australia and other countries (9).  Although the mechanism of bismuth toxicity is not fully understood, it has been linked to alterations of neurons in the brain (10).

Bismuth Antidiarrheal Side Effects

With regard to the use of bismuth as an antidiarrheal, the problems are twofold.  First, Reye’s syndrome is characterized by symptoms ranging from irritability to coma and even death.  Although the cause and cure of this syndrome are unknown, a link to salicylates has been made, supported by epidemiological studies correlating aspirin usage with Reye’s syndrome; correspondence to Figueroa et al. in the New England Journal of Medicine concerning their article raised this important public health point.  Second, children have a higher uptake of nutrients in general, and specifically an elevated uptake of metals compared with adults (11).  Based on this, children would be expected to have an increased risk of elevated bismuth levels in the body, which could potentially lead to encephalopathy and, in turn, other developmental problems.

Conclusion

Bismuth is certainly a cheap and effective way to treat gastric ulcers and to replenish water lost during diarrhea.  However, if excessive amounts of bismuth are introduced into the body, serious and potentially deadly side effects can occur.  For this reason, patients should be vigilant about the warning signs of bismuth toxicity and may want to talk with their doctor before adding any bismuth-containing products to their medicine regimen.

Resources

Pepto Bismol Website

Reye’s Syndrome

Worldwide Information about Diarrhea –World Health Organization

Chemical Information for Bismuth

Helicobacter pylori

References

(1) van der Vliet, A., Bast, A.  Role of reactive oxygen species in intestinal diseases.  Free Radical Biol. Med. 12, 499-513 (1992).

(2) Bagchi, D. et al.  Mechanism of Gastroprotection by Bismuth Subsalicylate Against Chemically Induced Oxidative Stress in Cultured Human Gastric Mucosal Cells.  Dig. Dis. Sci. 44(12), 2419-2428 (1999).

(3) Tanaka, S., Guth, P. H., Carryl, O. R., Kaunitz, J. D.  Cytoprotective effect of bismuth subsalicylate in indomethacin-treated rats is associated with enhanced mucus bismuth concentration.  Aliment. Pharm. Ther. 11(3), 605-612 (2003).

(4) Braun, S., Manhart, M., Balm, T.  Bismuth Subsalicylate in the Treatment of Acute Diarrhea in Children: A Clinical Study.  Pediatrics 87, 18-27 (1991).

(5) Figueroa-Quintanilla, D., Salazar-Lindo, E., Sack, R. B., et al.  A controlled trial of bismuth subsalicylate in infants with acute watery diarrheal disease.  N. Engl. J. Med. 328, 1653-1658 (1993).

(6) Ericsson, C. D., Evans, D. G.  Bismuth subsalicylate inhibits activity of crude toxins of Escherichia coli and Vibrio cholerae.  J. Infect. Dis. 136, 693-696 (1977).

(7) Ottlecz, A., Romero, J. J., Lichtenberger, L. M.  Effect of ranitidine bismuth citrate on the phospholipase A2 activity of Naja naja venom and Helicobacter pylori: a biochemical analysis.  Aliment. Pharm. Ther. 13(7), 875-881 (1999).

(8) Bruinink, A., Reiser, P., Mueller, M., Gaehwiler, B. H., Zbinden, G.  Neurotoxic effects of bismuth in vitro.  Tox. in Vitro 6(4), 285-293 (1992).

(9) Gordon, M. F., Abrams, R. I., Rubin, D. B., Barr, W. B., Correa, D. D.  Bismuth subsalicylate toxicity as a cause of prolonged encephalopathy with myoclonus.  Mov. Disord. 10(2), 220-222 (1995).

(10) Abramson, J. et al.  Bismuth in Infants with Watery Diarrhea.  N. Engl. J. Med. 329, 1742-1743 (1993).

(11) Patriarca, M., Menditto, A., Rossi, B., Lyon, T. D. B., Fell, G. S.  Environmental exposure to metals of newborns, infants and young children.  Microchem. Journal 67, 351-361 (2000).

Author: James East


 

 

A Tungsten Carbide and Cobalt Pulmonary Disease

In the early twentieth century, Germany developed a metal alloy that would come to be used in many different industries (1, 2).  Tungsten carbide (WC) cemented with cobalt (Co) forms an alloy, often called hard metal, that is hard enough to cut and polish many different metals, hard woods, and diamonds.  The alloy is produced when tungsten carbide powder and cobalt powder are heated to approximately 1,500 °C under high pressure (Figure 1).  The resulting product is approximately 80% tungsten carbide and 10-20% cobalt and may contain other metals (3).  The cobalt binds the tungsten carbide particles together and makes the material very wear-resistant and approximately as hard as diamond (1).  Because of this, it is used in high-speed cutting, drilling, grinding, and polishing of hard materials (4).


Figure 1. The hard-metal production process (2)

Case Studies

Workers who are exposed to the powdered forms of tungsten carbide and cobalt (particles <10 µm) are more susceptible to the disease because of the dust and aerosol particles in the air (2, 5).  Occupational exposure to potentially dangerous substances is often of great concern to government organizations and university-based hospitals.  The Centers for Disease Control released an article in 1992 about a 35-year-old industrial plant worker who had been exposed to aerosolized tungsten carbide and cobalt powder.  In 1989 he had reported 21 months of shortness of breath, and a chest radiograph showed interstitial abnormalities.  An open-lung biopsy showed interstitial fibrosis, many macrophages, and multinucleated giant cells in the alveolar spaces (air sacs in the lungs).  Upon testing of the biopsy, no tungsten or cobalt was detected.  A year before his biopsy, a supervisor at the same plant, who had worked in the same department, died of acute pulmonary fibrosis and was diagnosed with hard metal pulmonary disease.  A biopsy a few years prior to his death had shown multinucleated cells, interstitial fibrosis, and macrophages, findings consistent with multiple pulmonary diseases.  On reexamination of his biopsy, tungsten but not cobalt was detected.  After these two occurrences, OSHA investigated the airborne cobalt levels in the plant and in one instance found levels at 90% of the OSHA permissible exposure limit (PEL).  The metal-coating process was adjusted, and radiographs were taken of the 40 metal-coating employees (5).

The first reported occurrence of a pulmonary disorder associated with hard metal production or use was in 1940 in Europe (1, 2, 5).  Twenty-seven workers who had been exposed to hard metal dust were examined after working in a hard metal factory that had been in operation for two years.  Chest radiographs were taken of each worker, and eight of the workers showed reticular shadowing in areas of fine nodulation, suggesting the beginning of pneumoconiosis (inflammation followed by scarring of the lung tissue) (2, 6).  In addition, in 1951 two men who had worked with the powders for 10 and 30 years had chest radiographs taken.  Both had suffered from dyspnea, and the latter died of cardiac failure due to emphysema and chronic bronchitis (2).

Figure 2. Radiographs of healthy lung (left), 13 year (middle) and 23 year (right) exposure.

Case studies of hard-metal workers were performed from the 1940s to the 1960s, with radiographs taken of each worker.  In Figure 2, the left panel is a radiograph of healthy lungs, while the middle and right panels show the lungs of a man who mixed the powders for 13 years and of a man who sharpened tools for 23 years, respectively.  The patient in the middle panel has heavy hilar shadows (the hilum being the opening by which nerves, ducts, or blood vessels enter or exit an organ) and profuse micro-nodular opacities; most of his problems occurred in the mid and lower zones of the lungs.  The right panel shows that the second patient has an enlarged heart and small nodules, as well as an increase in translucency at the right base with destruction of both costophrenic angles (where the diaphragm meets the ribs).  Both patients had an increase in linear markings (2).  As these case studies and radiographs show, hard-metal workers who are susceptible to this disease have a poor prognosis.

What are the Symptoms of Hard-metal Disease?

There are many signs and symptoms of hard-metal disease.  A patient may have tightness of the chest, cough, clubbing, exertional dyspnea (shortness of breath), fatigue, sputum production, and weight loss.  Once the patient experiences some of these symptoms and seeks the advice of a medical professional, a radiograph is usually taken of the lungs.  The usual finding is an interstitial pattern suggesting fibrosis (8).

What Signifies Hard-metal Disease?

Once a radiograph shows interstitial fibrosis, a biopsy of the lungs is taken.  The presence of tungsten carbide is the first indicator; cobalt is not normally present because of its high biological solubility (5).  The biopsy usually shows the presence of macrophages and multinucleated giant cells (Figure 3) that have engulfed the surrounding cells (4, 8).


Figure 3. Multinucleated giant cells engulfing nearby cells (4)

Will All Hard-metal Workers Develop the Disease?

No.  There is no correlation between the length of time worked in the industry and the progression of the disease; some workers are simply individually sensitive to the particles (3).  Researchers have attempted to propose a route or mechanism of causation but have not been entirely successful; some have suggested an autoimmune mechanism (3).  It is believed that cobalt in the presence of tungsten carbide is what exacerbates the disease.  Tests have been performed on rats, guinea pigs, and mini-pigs, and tungsten metal or tungsten carbide alone did not reproduce the same results as when cobalt was present.  Most of the animals showed the presence of multinucleated giant cells and macrophages, but the research was incomplete and did not establish a definite mechanism by which the disease develops.

A Hypothesis on the Effect of Cobalt in the Lungs

A Fenton-like reaction can occur in which cobalt takes the place of ferrous ions; this reaction produces hydroxyl radicals, which are referred to as activated oxygen species (AOS).  When cobalt and tungsten carbide particles are in close contact, electrons are donated by the cobalt to the surface of the tungsten carbide particles; these electrons can in turn reduce oxygen to generate AOS.  The oxidized cobalt then goes into solution, which provides an explanation for why cobalt is usually not found in a biopsy.  Lison et al. had not completed the research required to confirm this hypothesis.  They also cite this radical formation as a reason that only 1-5% of the hard metal industry’s workforce actually develops the disease: if workers do not have a strong antioxidant defense, they may not be able to neutralize the radicals before damage has occurred (9).
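
A schematic way to summarize this hypothesis is with the reactions below, written by analogy with the classical iron-based Fenton reaction.  Reference 9 does not give these exact equations, so they should be read as an illustration of the proposed electron flow rather than an established mechanism.

```latex
% Schematic electron flow for the cobalt/tungsten carbide hypothesis
% (written by analogy with the iron-based Fenton reaction; these exact
% equations are not given in reference 9).
\begin{align*}
\text{cobalt oxidation at the WC surface:} \quad & \mathrm{Co \longrightarrow Co^{2+} + 2\,e^-}\\
\text{oxygen reduction on WC:} \quad & \mathrm{O_2 + 2\,H^+ + 2\,e^- \longrightarrow H_2O_2}\\
\text{Fenton-like radical generation:} \quad & \mathrm{Co^{2+} + H_2O_2 \longrightarrow Co^{3+} + OH^- + {}^{\bullet}OH}
\end{align*}
```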

Treatment

Unfortunately, no cure has been developed.  Once the disease is diagnosed, corticosteroids are the usual form of treatment, but they generally do not reverse the effects of the disease.  Once contracted, the prognosis is poor (3, 4).

Regulations

OSHA has set the limit for hard metal in the air at 50 µg/m3.  This will hopefully prevent workers from becoming sensitized and protect those who have already become hypersensitive to the particles (5).

References

  1. Fischbein, A.; Luo, J. J.; Solomon, S. J.; Horowitz, S.; Hailoo, W.; Miller, A. Clinical findings among hard metal workers. Brit. J. Industr. Med. 1992, 49, 17-24.
  2. Bech, A. O.; Kipling, M. D.; Heather, J. C. Hard Metal Disease. Brit. J. Industr. Med. 1962, 19, 239-252.
  3. Ruediger, H. W. Hard Metal Particles and Lung Disease: Coincidence or Causality? Respiration 2000, 67, 137-138.
  4. Cleveland Clinic. Occupational Lung Diseases. (accessed 22 September 2008).
  5. Center for Disease Control. Pulmonary Fibrosis Associated with Occupational Exposure to Hard Metal at a Metal-Coating Plant—Connecticut, 1989. (accessed 22 September 2008).
  6. Aetna InteliHealth. Pneumoconiosis. (accessed 30 September 2008).
  7. HubPages. Air Purifiers: What are they? And do you need one? (accessed 29 September 2008).
  8. Haz-Map: Occupational Exposure to Hazardous Agents. Hard Metal Disease. (accessed 22 September 2008).
  9. Lison, D.; Lauwerys, R.; Demedts, M.; Nemery, B. Experimental research into the pathogenesis of cobalt/hard metal lung disease. Eur. Respir. J. 1996, 9, 1024-1028.

Author: Morgan Moyer


 

History of Copper and Wound Healing

Copper is an essential trace element that has been known to be present in living tissue for more than 200 years.  Even before it was known to play an integral role in the human body, ancient cultures such as the Egyptians used copper for water sterilization, headaches, trembling of the limbs (likely seizures or Parkinson’s-like symptoms), burns, and itching (1).  Copper was also used in the Roman Empire to treat many ailments, including intestinal worms, chronic ulcers, and ear infections (1).  The Aztecs and nomadic Mongolian tribes also used copper for medicinal purposes (1).  Copper’s medical potency was first observed in the 19th century, during the 1832 cholera outbreak in Paris, when copper workers were found to be apparently immune to cholera (1).  Other early medicinal applications of copper typically involved the treatment of painful joints and muscles using copper bracelets or copper-containing ointments.  In the 20th and 21st centuries, copper has been included in dietary vitamins and has been used to treat chronic wounds, tuberculosis, burns, rheumatic fever, rheumatoid arthritis, sciatica, and seizures, and as a supplement for general disease prevention (1).  Scientific studies have clearly defined a role for copper in the regulation of growth, development, and function in human and animal bodies.  Copper is utilized by almost every cell in the human body, resulting in the intracellular formation of copper-dependent enzymes such as cytochrome c oxidase (energy production), superoxide dismutase (antioxidation), lysyl oxidase (crosslinking of the elastin and collagen matrix), and dopamine beta-hydroxylase (catecholamine formation) (2).

Copper’s Wound Healing Complex

In the 1970s, scientists isolated a copper-binding peptide in humans with the amino acid sequence glycyl-L-histidyl-L-lysine (GHK) (3) (see the structure of GHK below).  GHK is a tripeptide with an affinity for copper(II) ions; when copper is bound, the GHK-Cu(II) complex is formed.  Although used to treat a variety of modern-day diseases, copper is especially known for its complex role in promoting wound healing.  In human plasma and wound areas, GHK exists as a mixture of GHK and GHK-Cu(II).  GHK has a high binding affinity for copper(II) (pK = 16.2) (3); however, under physiological conditions, only about 5% to 20% of GHK molecules exist as GHK-Cu(II) complexes (3).

The structure of GHK is very similar to that of common drugs used to treat ulcers (3):

The structure of common anti-ulcer drugs (3):

Structure of GHK-Cu(II) complex (7):


Many Mechanisms of Action

There are four distinct but overlapping stages of wound healing: hemostasis, inflammation, proliferation, and remodeling (2).  Copper peptide activity begins almost immediately following injury, during the inflammatory stage.  Mast cells, which are immune cells located in the skin, secrete GHK, which binds biologically available Cu(II), thereby increasing GHK-Cu(II) concentrations in the wound area.  In general, copper promotes healing via two pathways.  First, GHK-Cu(II) protects tissue by acting as an anti-inflammatory agent, limiting oxidative damage after tissue injury and suppressing local inflammatory signals (e.g., the cytokine IL-1) (2).  Second, GHK-Cu(II) acts as an activator that signals the removal of damaged tissue and promotes the formation of healthy tissue (2).  Thus, copper is unusual in that, unlike most anti-inflammatory agents (e.g., ibuprofen), it does not disrupt wound healing and actually promotes cleaner, faster, and better wound closure.

There is a large body of in vivo and in vitro studies that supports the role of copper in wound healing.  In dogs with paw pad injuries, local injections of the tripeptide-copper complex in the first week following injury resulted in wounds that healed faster and had higher collagen content than those of control animals (4).  The images below illustrate the increased presence and organization of fibroblasts (extracellular matrix/collagen-producing cells) in rat tissue 14 days following GHK-Cu treatment (5).

Control tissue (5):


GHK-Cu(II) treated tissue (5):


 

In vitro studies have begun to elucidate cellular mechanisms by which GHK-Cu(II) promotes wound healing (2, 6).  Keratinocytes are the major cell type of the epidermis.  Integrins are a family of cell surface receptors present on keratinocytes that allow keratinocyte signaling and interaction with other cells and the extracellular matrix.  For wound healing specifically, keratinocyte integrins facilitate the cell-to-cell and cell-to-extracellular-matrix attachment necessary for epidermal repair.  Cellular studies have shown that copper can alter the complement of keratinocyte integrins expressed during the re-epithelialization and remodeling phases of healing (6).  This is thought to be one of many mechanisms by which copper modulates healing signals to improve wound healing.  Although the picture is complex, it is clear that copper plays an essential role in cellular mechanisms of homeostasis and repair, and that wound treatments involving copper therapy can be effective in improving healing time and quality.

Note: Exposure to high concentrations of copper can cause illness.  For more information on the toxicology of copper, please see the following link: http://www.atsdr.cdc.gov/toxprofiles/tp132.pdf

References

(1) Image source

(2) Image source

(3) Image source

(4) Swaim SF., Vaughn DM., Kincaid SA., Morrison NE., Murray SS., Woodhead MA., Hoffman CE., Wright JC., Kammerman JR. Effect of locally injected medications on the healing of pad wounds in dogs. Am Vet Res. 57 (3), 394-399 (1996).

(5) Maquart FX., Bellon G., Chaqour B., Wegrowski J., Patt LM., Trachy RE., Monboisse JC., Chastang F., Birembaut P., Gillery P., Borel JP. In vivo stimulation of connective tissue accumulation by the tripeptide-copper complex glycyl-L-histidyl-L-lysine-Cu2+ in rat experimental wounds.  J Clin Invest 92, 2368-2376 (1993).

(6) Tenaud I., Sainte-Marie I., Jumbou O., Litoux P., Dreno B. In vitro modulation of keratinocyte wound healing integrins by zinc, copper and manganese.  Br J Dermatol 140, 26-34 (1999).

(7) Image source

Author: Rebecca Reddaway


 

What is Itai-itai Disease?

Itai-itai disease was the first documented occurrence of mass cadmium poisoning in the world.  It occurred around 1950 in Toyama Prefecture in Japan, although the disease was first reported there in 1912.  Toyama Prefecture was at the time the leading industrial prefecture on the Sea of Japan coast.  Itai-itai disease literally translates to “ouch-ouch” disease, named for the painful screams of its victims.

Cadmium Poisoning

Cadmium poisoning is a serious example of the toxicity of some metals in the body.  Cadmium has no constructive function in the body; it serves no biological purpose and acts only as a toxin, and it is highly toxic even at low doses.  Some of the effects of acute cadmium exposure are flu-like symptoms: fever, chills, and muscle aches, sometimes referred to as “the cadmium blues.”  More serious exposure to cadmium has far more detrimental effects.  Any significant amount of cadmium taken up by the body immediately poisons the liver and kidneys.  Proximal renal tubular dysfunction occurs when significant amounts of cadmium are ingested, meaning the kidneys lose their ability to remove acid from the blood.  A side effect of this is gout, which most likely contributes to much of the pain endured by victims of Itai-itai.  The kidney damage caused by cadmium is irreversible.  Serious damage is also inflicted on the bones of an Itai-itai victim: cadmium poisoning leads to osteomalacia (softening of the bones) and osteoporosis (loss of bone mass and weakness).  In extreme cases, a person with Itai-itai can sustain bone fractures from their body weight alone.  Cadmium is also a carcinogen.

Why Did Itai-itai Disease Even Occur?

Mining was prevalent in the Toyama Prefecture of Japan starting around the year 710.  After WWI, new mining technology arriving from Europe made the Kamioka Mines in Toyama among the most productive in the world.  Starting back in 1910, cadmium was being released in significant quantities into the Jinzu River in Toyama.  This was a major problem because the cadmium in the water killed the fish, and the river was the major source of irrigation water for the surrounding paddy fields as well as of drinking water.  In 1912 the first documented case of the disease emerged, only two years after the cadmium had begun showing up in large quantities in the river, which suggests that a great deal of cadmium was present.

Finally to the Point – Pregnancy as it Relates to Cadmium Poisoning

It was reported that over 200 elderly women living in the Jinzu Valley in the 1940s, mothers of multiple children, were disabled by the disease, on top of a reported 65 deaths of women from Itai-itai.  So what was going on?  Why were pregnant women at greater risk of cadmium poisoning?

Metal Mimicry is the Answer

The key actually lies in a close chemical relative of cadmium, namely zinc.  Zinc and cadmium share an uptake pathway in the body, so cadmium, which is very similar to zinc in reactivity, is unwittingly taken up by the body’s zinc uptake proteins.  And it seems to be taken up in large quantities, very quickly.  Tests on zinc uptake in yeast cells show that the zinc uptake protein ZNT1 followed normal Michaelis-Menten kinetics when importing zinc; cadmium, however, did not.  The point at which the import of cadmium started to approach Vmax was not found, meaning that the saturation point of the transporters shuttling cadmium into the cell was not in the same range as that for zinc but much higher.  So it appears that cadmium uptake can continue at high rates well beyond the concentrations at which zinc uptake saturates.

Figure 1.  The Michaelis-Menten Kinetics of the ZNT1 protein in Yeast when Importing Zinc and Cadmium


From: Pence, N. S. et al. Proc. Natl. Acad. Sci. USA 97, 4956-60 (2000).
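
To make the kinetic comparison concrete, the Python sketch below evaluates the Michaelis-Menten rate law, v = Vmax[S]/(Km + [S]), for a transporter that saturates over the tested concentration range and for one that does not.  All parameter values are invented for illustration; they are not the constants measured by Pence et al.

```python
# Illustrative comparison of saturating (Michaelis-Menten) uptake with uptake
# that does not approach Vmax over the tested concentration range.
# All parameter values are made up; they are NOT the kinetic constants
# reported by Pence et al. (2000).

def michaelis_menten(s, vmax, km):
    """Uptake rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

vmax, km_zn = 100.0, 10.0        # hypothetical zinc transport parameters
km_cd = 500.0                    # a much larger Km: no visible saturation

for s in (1, 10, 50, 100, 200):  # substrate concentrations (arbitrary units)
    v_zn = michaelis_menten(s, vmax, km_zn)
    v_cd = michaelis_menten(s, vmax, km_cd)
    print(f"[S]={s:>4}: zinc-like v={v_zn:6.1f} (saturating), "
          f"cadmium-like v={v_cd:6.1f} (still far from Vmax)")
```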

What Does This Have to do with Pregnant Women?

A normal adult woman needs about 7.0 mg of zinc per day.  A pregnant woman needs even more than this, and it is believed that the body accounts for this through increased uptake from the gut.  On top of that, women who are breastfeeding need still more, closer to 9.0 to 13.0 mg/day: in the first four months of breastfeeding, the milk contains 2 mg of zinc, and a woman needs an extra 6 mg/day of zinc to make up for this.

Thinking About the Normal Diet of a Woman in this Region

During the mid-1900s in a prefecture in Japan, it is safe to assume that these women ate a lot of rice.  But this turned out to be a serious problem.  The rice was mainly irrigated with water from the Jinzu River, which, as we know, had been loaded with cadmium.  So now we have rice that is naturally low in zinc and saturated with cadmium, which is the last thing a pregnant woman should be eating when her body is trying to take up extra zinc.  But how would they have known?  And what could replace a staple food like rice if it were cut out of the diet?

The Big Picture

Now we know that:

  1. Pregnant women’s bodies increase their uptake of zinc through the gut
  2. A staple food of the region was literally saturated with cadmium
  3. Zinc and cadmium share an uptake pathway in the body

It is not hard to see how so much cadmium got into the bodies of these women.  With cadmium and zinc sharing an uptake pathway, the body is fooled into taking up large amounts of cadmium that it treats as zinc.  On top of that, these women probably were not getting enough zinc, and zinc deficiency can lead to as much as a 15-fold increase in cadmium retention.  So, at first glance it might seem that there is little connection between pregnancy and cadmium poisoning, but as you can see, that is not the case.

References

Wikipedia.  (2007).  Itai-itai Disease.  Retrieved September 24, 2007.

Energy Citations Database.  (2001).  Document #5175630.  Retrieved September 20, 2007.

VEGSOC.  Zinc Information Sheet.  Retrieved September 20, 2007.

Kanazawa-Med.  Itai-itai Disease.  Retrieved September 23, 2007.

Wikipedia.  (2007).  Cadmium poisoning.  Retrieved September 24, 2007.

N. S. Pence, P. B. Larsen, S. D. Ebbs, D. L. D. Letham, M. M. Lasat, D. F. Garvin, D. Eide, L. V. Kochian.  (2000).  “The molecular physiology of heavy metal transport in the Zn/Cd hyperaccumulator Thlaspi caerulescens.”  Proc. Natl. Acad. Sci. USA, 97(9), 4956-60.

Author: Jarrod Rasnake


 

What is Nuclear Medicine?

Nuclear medicine is a subdivision of medical imaging in which the patient is given a radioactive material that highlights or treats a disease or abnormality.  The radioactive material is usually referred to as a radiotracer or radiopharmaceutical.  The radiotracer can be delivered to its destination in three ways: injection, ingestion of a pill, or inhalation of a gas, depending on where it is needed.  Once inside the body, the chemical accumulates at its intended target and gives off radiation, most commonly gamma rays.  These gamma rays are then detected by one of several devices: a gamma camera, a positron emission tomography (PET) scanner, or a probe.  In terms of diagnosis, nuclear medicine can be useful for determining kidney function, heart blood flow and function, respiratory function, gallbladder inflammation, the density and structure of bones, the presence and location of cancerous tissue, the location of an infection, thyroid function, abnormalities in the brain, and the localization of lymph nodes before surgery.  The second function of nuclear medicine is as a therapeutic treatment for hyperthyroidism, certain forms of lymphoma (via radiolabeled antibodies), blood disorders, and tumors that have metastasized to the bones.

How Does the PET Scan Work?

PET scanning is a branch of nuclear medicine that uses radiotracers that decay by emitting positrons.  A positron is a particle with the same mass as an electron but the opposite charge.  Each emitted positron soon encounters an electron, and the two annihilate, emitting a pair of gamma photons that travel in nearly opposite directions.  Detecting many of these photon pairs in coincidence, with each pair defining a line through the body, is what allows the 3-D distribution of the tracer in the organ being observed to be reconstructed.  The tube the patient lies in is lined with a material called a scintillator, which absorbs the gamma photons and converts their energy to longer-wavelength light that is easier to detect and record.  Several radionuclides are used for PET scans, including carbon-11, nitrogen-13, oxygen-15, and fluorine-18.  These radionuclides are typically incorporated into a compound that is normally used by the body, such as glucose or a glucose analogue, water, or ammonia, or into molecules that bind to receptors that are drug targets.  Once the radiotracer is taken into the body, there is a waiting period so that the chemical can become concentrated at the target.

PET process
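
As a toy illustration of why back-to-back emission allows 3-D reconstruction, the Python sketch below models a 2-D detector ring and finds the two points struck by a photon pair emitted from an annihilation site; the annihilation always lies on the line joining the two hits (the line of response).  The ring radius and source position are arbitrary made-up values, not parameters of any real scanner.

```python
import math

# Toy 2-D model of PET coincidence detection: an annihilation at (x0, y0)
# emits two photons in opposite directions; both strike a circular detector
# ring of radius R.  The annihilation point lies on the chord (line of
# response) between the two hits.  All numbers here are illustrative.

R = 40.0  # detector ring radius in cm (hypothetical value)

def ring_hits(x0, y0, angle):
    """Return the two points where back-to-back photons emitted from (x0, y0)
    at 'angle' and 'angle + pi' strike the detector ring."""
    dx, dy = math.cos(angle), math.sin(angle)
    # Solve |(x0 + t*dx, y0 + t*dy)| = R for t, i.e. t^2 + 2*b*t + c = 0.
    b = x0 * dx + y0 * dy
    c = x0 ** 2 + y0 ** 2 - R ** 2
    disc = math.sqrt(b * b - c)  # real whenever (x0, y0) is inside the ring
    return ((x0 + (-b + disc) * dx, y0 + (-b + disc) * dy),   # forward photon
            (x0 + (-b - disc) * dx, y0 + (-b - disc) * dy))   # opposite photon

hit1, hit2 = ring_hits(5.0, -8.0, math.radians(30))
print("detector hits:", hit1, hit2)
# The emission point (5, -8) lies on the segment between hit1 and hit2, so
# accumulating many such lines of response localizes the tracer distribution.
```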

A PET Scan of the Brain

The most common radiotracer used in neurology is FDG, or [18F]fluorodeoxyglucose, a glucose analogue that is taken up by living tissue.

fluorodeoxyglucose

Once taken up by a cell, FDG is phosphorylated but cannot proceed through glycolysis, so it is trapped in the cell; this gives the FDG time to emit the positrons needed to reveal its location.  This shows doctors where glucose uptake is normal and where it is reduced, that is, which tissue is alive and functioning normally.  One application is locating the focal point of seizures, where there is a significant decrease in metabolism.  FDG is also at the forefront of Alzheimer’s disease diagnosis: in a brain with Alzheimer’s-like pathology there is a significant decrease in both glucose and oxygen metabolism.  A third neuroimaging application of the PET scan is imaging receptor pools in the brain.  The injected radiotracer is a ligand for a certain neuroreceptor, so it binds to that receptor in the brain.  Certain neurological diseases are characterized by increased or decreased receptor pools, which can be visualized with a PET scan.

PET brain image

References

Radiology Info PET

What is PET

Medical Imaging

Image Sources

Sagittal brain MRI

PET image

fluorodeoxyglucose

PET and bhcp

Author: Weston Andrews


 

Magnetic resonance imaging (MRI) has become one of the most powerful techniques to date in the fields of diagnostic medicine and biomedical research.(1) It is a medical imaging technique most commonly used in radiology to visualize the structure and function of the body and is especially useful in the areas of neurological, musculoskeletal, cardiovascular, and oncological (cancer) imaging.


Figure 1. An MRI Instrument

When coupled with the use of a contrast agent, the images provided by this technique are greatly enhanced, much improving the ability to distinguish between different tissue types.(2) The MRI contrast agents in clinical use today are predominantly gadolinium based and consist of a central paramagnetic gadolinium(III) ion chelated to an 8-coordinate, water-soluble ligand and a water molecule, forming a 9-coordinate complex.(3) The geometries of such complexes are found to be tricapped trigonal prism (TTP) or capped square antiprism (CSAP), with the TTP geometry being the more favorable.(3) These geometries are depicted below in Figure 2.

Figure 2. 9-Coordinate Geometries Adopted by Gadolinium Contrast Agents (3)

Gadolinium is a lanthanide metal and has a silvery-white appearance with a metallic luster as shown in Figure 3. Gadolinium itself has no known biological role, but rather is used in research techniques with biological systems.(4)


Figure 3. Gadolinium Metal

Gadolinium(III) is a toxic heavy metal ion with seven unpaired electrons.  It is comparable in size to Ca(II), which can lead to disruption of crucial Ca(II)-dependent signaling in the body.(5) The key reason gadolinium(III) ions can nevertheless be used safely in contrast agents (even when administered on the gram scale) is that the gadolinium ion is strongly bound within its ligand and shows essentially no observable dissociation from the ligand within the body.(3) This strong binding persists despite the presence of numerous chelating substances in the body, such as phosphates, citrates, and transferrin, which allows the gadolinium complexes to be excreted from the body with the gadolinium(III) ion still bound.(3) Some examples of MRI contrast agents are shown in Figure 4.(3)

MagnevistTM (1, Figure 4) was the first drug to be approved as an MRI contrast agent (in 1988) and is produced and marketed by Schering, Germany.(3) It contains a highly paramagnetic gadolinium(III) central ion chelated to an 8-coordinate diethylenetriaminepentaacetic acid (DTPA) ligand and to one water molecule, forming a di-anionic complex. It is also marketed under the generic name gadopentetate dimeglumine.


Figure 4. Various Gadolinium-containing MRI Contrast Agents

ProHanceTM (2, Figure 4) contains a gadolinium(III) ion chelated to an 8-coordinate 1,4,7,10-tetraazacyclododecane-1-(2-hydroxypropyl)-4,7,10-triacetic acid (HP-DO3A) ligand and to a solvent water molecule to form a neutral complex. It is produced by Bracco, Italy and goes by the generic name gadoteridol.(3) OptiMARKTM (3, Figure 4) is a drug in clinical trials for evaluation as a potential extracellular MRI contrast agent.(3) It is produced by Mallinckrodt, USA and also makes use of a highly paramagnetic gadolinium(III) metal center. The gadolinium is bound to an 8-coordinate diethylenetriaminepentaacetic acid-N,N’-bis(methoxyethylamide) (DTPA-BMEA) ligand and one water molecule.

Since the approval of Magnevist in 1988, it has been projected that over 30 metric tons of gadolinium have been injected into millions of people around the world.(3) In 1999, about 30% of all MRI exams made use of gadolinium contrast agents, and this percentage is predicted to have risen to nearly 50% today.(7)
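
As a rough sanity check on these figures, the sketch below estimates how many contrast-enhanced exams 30 metric tons of gadolinium would correspond to; the 0.1 mmol/kg dose and 70 kg patient mass are illustrative assumptions rather than values taken from the cited sources.

```python
# Back-of-the-envelope estimate: how many contrast-enhanced MRI exams does
# 30 metric tons of injected gadolinium correspond to?
# The dose (0.1 mmol Gd/kg) and patient mass (70 kg) are illustrative
# assumptions, not figures from the cited references.
GD_MOLAR_MASS_G_PER_MOL = 157.25

dose_mmol_per_kg = 0.1      # assumed typical clinical dose
patient_mass_kg = 70.0      # assumed average patient

gd_per_exam_g = dose_mmol_per_kg * patient_mass_kg * GD_MOLAR_MASS_G_PER_MOL / 1000.0
total_gd_g = 30e6           # 30 metric tons expressed in grams

print(f"Gd per exam:    {gd_per_exam_g:.2f} g")
print(f"Exams implied:  {total_gd_g / gd_per_exam_g:.1e}")
# ~1.1 g of Gd per exam implies on the order of 25-30 million doses,
# consistent with "millions of people around the world".
```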

MRI involves pulsing the target with radiofrequency (RF) radiation in the presence of an externally applied magnetic field to induce spin-flip excitation of water-molecule protons.(8) When the protons relax back to their ground state, energy is released into the surroundings and recorded, eventually giving rise to the MR image. Three tissue parameters determine the intensity (brightness) of the generated signal, and thus the image contrast between different tissue types:

  1. Proton density
  2. T1 (spin-lattice) relaxation time (a few seconds)
  3. T2 (spin-spin) relaxation time (a few hundred milliseconds) (8)

MRI scans that target T1 relaxation times are called T1-weighted scans, and scans that target T2 relaxation times are called T2-weighted scans.(8) In T1-weighted scans, a short repetition time (the time span between two successive excitations) of less than 600 ms is used to create contrast in the MR image.(8) Tissues with short T1 relaxation times are able to relax to the ground state and produce a signal, but tissues with longer T1 relaxation times do not have time to relax between successive pulses, remain in the excited state, and generate little signal. Tissues with short T1 relaxation times thus appear bright in the MR image, in contrast to tissues with long T1 relaxation times.(8)

T2-weighted scans make use of the echo time (the period of time between excitation and measurement of the MR signal).(8) T2 relaxation times are much shorter than T1 relaxation times. The longer the echo time, the greater the image contrast. This contrast arises because tissue with short T2 relaxation times (shorter than the echo time) has fully relaxed before signal acquisition starts, while tissue with longer T2 relaxation times (longer than the echo time) is still relaxing at the commencement of signal acquisition and thus generates a signal.(8) Tissue with longer T2 relaxation times therefore appears bright, while tissue with short T2 relaxation times appears dark in the MR image.
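
The behavior described in the two preceding paragraphs can be summarized with the standard spin-echo signal approximation, S ∝ PD·(1 − e^(−TR/T1))·e^(−TE/T2). The short sketch below evaluates it for two hypothetical tissues; the T1/T2 values and scan parameters are illustrative assumptions, not figures taken from reference (8).

```python
from math import exp

def spin_echo_signal(pd, t1_ms, t2_ms, tr_ms, te_ms):
    """Standard spin-echo signal approximation:
    S ~ PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1.0 - exp(-tr_ms / t1_ms)) * exp(-te_ms / t2_ms)

# Illustrative tissue parameters (proton density, T1 in ms, T2 in ms):
tissues = {"short-T1/short-T2": (1.0, 300.0, 60.0),
           "long-T1/long-T2":   (1.0, 1500.0, 150.0)}

# T1-weighted: short TR, short TE -> the short-T1 tissue appears bright.
# T2-weighted: long TR, long TE  -> the long-T2 tissue appears bright.
for name, (pd, t1, t2) in tissues.items():
    s_t1w = spin_echo_signal(pd, t1, t2, tr_ms=500.0, te_ms=15.0)
    s_t2w = spin_echo_signal(pd, t1, t2, tr_ms=4000.0, te_ms=100.0)
    print(f"{name:20s}  T1-weighted: {s_t1w:.2f}   T2-weighted: {s_t2w:.2f}")
```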


Figure 5. Various MR Images

Gadolinium-containing contrast agents such as those shown in Figure 4 reduce T1 and T2 relaxation times by roughly the same amount and are best visualized using T1-weighted scans.(3) Tissue containing higher concentrations of contrast agent shows heightened intensity in the MR image because of the decreased T1 relaxation time the agent induces in the protons of surrounding water molecules. This also allows shorter repetition times to be used, as the contrast agent more rapidly restores the temporary excitation back to the ground state between pulses.(7) These gadolinium(III) contrast agents function by rapidly transferring the excitation carried by the water protons to the highly paramagnetic gadolinium(III) central ion. The water exchange rate (1/τm, Figure 6) is thus important in MRI contrast agents, with high 1/τm rates being desired. Other factors important to the efficiency of contrast agents are the degree of molecular tumbling and the number of inner-sphere coordinated water molecules, as shown in Figure 6. Increased contrast efficiency has been shown when molecular tumbling is reduced (by increasing molecular weight) and when more inner-sphere water molecules are present. Gadolinium(III) contrast agents rarely contain more than one inner-sphere coordinated water molecule, however, because the resulting complex, with fewer bonds to the chelating ligand, has compromised stability.(3)


Figure 6.  A Scheme Adapted from Raymond et al. (7) Highlighting the Factors that Affect the Efficiency of Contrast Agents
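
One way to see how a gadolinium agent brightens T1-weighted images is through the standard linear relaxivity relation, 1/T1(obs) = 1/T1(intrinsic) + r1·[Gd]. The sketch below evaluates it with order-of-magnitude numbers; the relaxivity and tissue T1 used here are illustrative assumptions, not values from the cited papers.

```python
def observed_t1(t1_intrinsic_s, r1_per_mM_per_s, conc_mM):
    """Linear relaxivity model: 1/T1_obs = 1/T1_intrinsic + r1 * [Gd]."""
    rate = 1.0 / t1_intrinsic_s + r1_per_mM_per_s * conc_mM
    return 1.0 / rate

# Illustrative numbers (order of magnitude only):
t1_tissue_s = 1.0   # intrinsic T1 of a tissue, in seconds
r1 = 4.0            # assumed relaxivity of a small Gd(III) chelate, mM^-1 s^-1

for conc in (0.0, 0.1, 0.5, 1.0):   # local Gd concentration in mM
    t1_ms = observed_t1(t1_tissue_s, r1, conc) * 1000.0
    print(f"[Gd] = {conc:4.1f} mM  ->  T1 = {t1_ms:6.1f} ms")
# Higher local Gd concentration -> shorter T1 -> brighter tissue on a
# T1-weighted scan, and shorter repetition times become usable.
```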

References

  1. Pierre, V. C.; Botta, M.; Aime, S.; Raymond, K. N. Inorg. Chem. 2006, 45, 8355-8364.
  2. Datta, A.; Hooker, J. M.; Botta, M.; Francis, M. B.; Aime, S.; Raymond, K. N. J. Am. Chem. Soc. 2008, 130, 2546-2552.
  3. Caravan, P.; Ellison, J. F.; McMurry, T. J.; Lauffer, R. B. Chem. Rev. 1999, 99, 2293-2352.
  4. http://en.wikipedia.org/wiki/Gadolinium
  5. Cacheris, W. P.; Quay, S. C.; Rocklage, S. M. Magn. Reson. Imaging 1990, 8, 467-481.
  6. Jocher, C. J.;  Moore, E. G.; Xu, J.; Avedano, S.; Botta, M.; Aime, S.; Raymond, K. N. Inorg. Chem. 2007, 46, 9182-9191.
  7. Raymond, K. N.; Pierre, V. C. Bioconjugate Chem. 2005, 16, 3-8.
  8. Weishaupt, D.; Köchli, V. D.; Marincek, B. How Does MRI Work? Springer, 2003, 1-15.

Image Sources

MRI instrument
Gadolinium Metal
Various MR Images

Author:  Allan Prior


 

Rheumatoid arthritis is a chronic inflammatory autoimmune disorder that affects one to two percent of people worldwide, and five percent of women over the age of 55 (1).  Women between the ages of forty and sixty are most susceptible to the disease.  It is a painful, incurable disease that can lead to total loss of joint use within ten years of onset (2).  Symptoms are pain and swelling of the affected joint, and over time the soft tissue surrounding the joint erodes away.  People with advanced rheumatoid arthritis often have deformed hands or feet due to uncontrollable hyperextension or hyperflexion.  The causes of rheumatoid arthritis are unknown.

joint erosion

Joint Erosion

Two classes of drugs are used to treat rheumatoid arthritis.  The first class, the “first-line” drugs, consists of non-steroidal anti-inflammatory drugs (NSAIDs) such as aspirin and ibuprofen.  These drugs simply alleviate pain and swelling.  The second class, the “second-line” drugs, consists of disease-modifying anti-rheumatic drugs (DMARDs), and includes methotrexate, D-penicillamine and various gold salts (3).  DMARDs “modify” rheumatoid arthritis and are able to slow its advance, allowing patients to retain flexibility and motion for longer periods of time.  In general, these are dangerous drugs and are only prescribed for patients with severe rheumatoid arthritis who are not responding to other treatments.

rheumatoid arthritis hand

Rheumatoid Arthritis Hand

Gold salts have been used to treat rheumatoid arthritis since the early twentieth century.  They are immunosuppressants.  Unlike other DMARDs, gold salts have been known to reverse erosive damage (4).  Examples of gold salts are gold sodium thiomalate (“Myocrisin”), gold thioglucose (“Solganal”) and gold thiosulfate.  Gold sodium thiomalate is the only gold salt that is FDA approved.  Countries outside the United States use other gold salts as well.

The Immune System

Rheumatoid arthritis patients are interested in anything that reduces pain and swelling.  The fact that gold salts are also able to reverse erosive damage makes them especially attractive as a treatment option.

The one biochemical reaction that gold ion is known to undergo in the body is binding to sulfhydryl groups (-SH), thereby interfering with reactions that rely on these functional groups (5,6).  This gives gold salts the potential to affect a wide range of reactions.  Links have been found between gold salt therapy and reduced activity of certain areas of the immune system.  For instance, gold salts inhibit the activity of lysosomal enzymes, which are important for the action of phagocytotic cells (7).  In the immune response, infected cells are targeted and attacked by phagocytotic cells, which engulf the infected cells and digest them using lysosomal enzymes.  In another example, studies have found that patients with inflammatory autoimmune disorders have greater levels of “substance P,” a neuropeptide found around nerve cells.  Gold therapy reduces levels of substance P in the blood serum (8).  Several studies have found that gold therapy affects relative levels of prostaglandins (PGs) and leukotrienes.  Specifically, gold salts lower levels of PGF-2α and increase levels of PGE2.  PGF-2α is involved in stimulating the release of lysosomal enzymes, while PGE2 inhibits lysosomal enzyme release (9).  By changing the relative levels of these prostaglandins, gold salts affect the immune response.

side effects

In short, gold salts suppress the immune response in such a way that the effects of rheumatoid arthritis are greatly reduced and in some cases reversed, making them very attractive to patients.

Mitochondria and Oxidative Stress

Doctors want to prescribe medicine that will treat their patients as well as possible, as fast as possible and as safely as possible.  Gold therapy has the potential to cause serious side effects that make doctors leery of prescribing gold salts.

The ability of gold ion to bind to thiol groups allows it to bind to proteins in the mitochondrial membrane.  When this happens, the mitochondrial membrane becomes more permeable to positively charged ions.  This results in decoupling of the oxidative phosphorylation reaction that synthesizes adenosine triphosphate (ATP) (10).  Mitochondria rely on a charge gradient to drive ATP production, and when this gradient is disrupted, ATP synthesis is severely inhibited, leading to cell death.  In addition, studies have found that gold may be retained in the mitochondria of liver, kidney and bone marrow cells, the areas where gold treatment is most devastating (11).  The presence of any heavy metal in the liver or kidneys interferes with their filtration function and can lead to damage that shows up as protein or blood in the urine.  The effect of gold on bone marrow and blood cells (blood cells originate in the bone marrow) is the most serious side effect of gold therapy.  Patients on gold therapy have lower levels of red blood cells, white blood cells and blood platelets.  In severe cases, aplastic anemia can occur, in which normal bone marrow stem cells are replaced by fat cells and the body is physically unable to replenish the blood cells it is losing (12).  If left untreated, this can lead to death.

oxidative decoupling

Oxidative Decoupling

Thus, while gold salts may have some very beneficial treatment effects, their side effects are common enough and serious enough that doctors shy away from prescribing them.

Drug Comparison

In recent years, gold salts have largely been replaced by other drugs for rheumatoid arthritis treatment, especially the anti-cancer drug methotrexate.  Methotrexate may not be a “better” treatment so much as it is a newer treatment whose track record is not long enough to be as bad as that of gold salts.

In terms of dosage and effect, methotrexate has an advantage over gold sodium thiomalate.  The methotrexate dosage is 7.5 mg per week, and improvement may be seen in three to six weeks (13).  Gold sodium thiomalate, on the other hand, has a dosage between 25 and 50 mg per week, and it may take three to six months for improvement to be detected (14).  In addition, the half-life of gold in the body (3-27 days) is much longer than that of methotrexate (3-10 hours).  So less methotrexate is required for a faster response, with less chance of causing adverse reactions than gold salts.
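
To illustrate what these half-lives imply for weekly dosing, the sketch below assumes simple first-order elimination (a simplification of real pharmacokinetics) and uses rough midpoints of the ranges quoted above.

```python
def fraction_remaining(half_life_hours, elapsed_hours):
    """Simple first-order elimination: N/N0 = 0.5 ** (t / t_half)."""
    return 0.5 ** (elapsed_hours / half_life_hours)

# Rough midpoints of the half-life ranges quoted in the text:
mtx_half_life_h  = 6.0          # methotrexate, ~3-10 hours
gold_half_life_h = 14 * 24.0    # gold, ~3-27 days

one_week_h = 7 * 24.0
print(f"Methotrexate left after 1 week: {fraction_remaining(mtx_half_life_h, one_week_h):.1e}")
print(f"Gold left after 1 week:         {fraction_remaining(gold_half_life_h, one_week_h):.2f}")
# With weekly dosing, essentially no methotrexate carries over between doses,
# while a substantial fraction of each gold dose is still present -- one reason
# gold accumulates in tissues over a course of therapy.
```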

Where methotrexate loses out is in side effects and drug interactions.  Overall, methotrexate has fewer side effects than gold salts, and a lower percentage of patients are affected by them.  However, whereas gold salt side effects are almost always completely reversible upon cessation of treatment, methotrexate side effects, especially lung and liver damage, are not, and can cause health problems later. Also, methotrexate interacts with a wide range of other drugs, including the NSAIDs used to treat pain, and can cause unexpected adverse reactions.  It is never used in conjunction with other drugs.  Gold sodium thiomalate, however, interacts with very few drugs, the most dangerous being penicillamine, another DMARD.  It is always used as part of a drug regimen.  Finally, on a more economic basis, gold sodium thiomalate is much cheaper than methotrexate, costing less than $50 for a solution of 10 mg/mL (15).  Methotrexate costs between $200 and $300 for 10 mg (16).

drug interactions

Whether methotrexate or gold salts are the better rheumatoid arthritis drugs is a decision that should be left to experts; yet looking at the data available to the public, it is not obvious why methotrexate, which appears to be as dangerous a drug as most gold salts, is FDA-approved while most gold salts are not.  It is especially important to note that the mechanism of action in rheumatoid arthritis is unknown for both drugs.  Some patients might prefer a higher risk of side effects if those side effects are completely reversible, as they are with gold salts.  Doctors want safety and results, and methotrexate gives faster results with fewer side effects.  All drugs containing heavy metals are toxic because of the effects of heavy metals on the body.  They are all dangerous, yet if used correctly and with caution they can be of great benefit.  Gold salts for the treatment of rheumatoid arthritis are a good example of how heavy metal drugs can be taken off the market for being dangerous while other, equally dangerous drugs stay on for reasons that are not clear to the public.  Ultimately, while people with little knowledge of medicine should not necessarily have a hand in drug policy, it is important that they have access to information about treatment and drug options, and about why one treatment is FDA-approved while another is not.

Resources

http://www.medicinenet.com

http://www.hopkins-arthritis.org

http://www.fda.gov

References

(1) Rheumatoid arthritis John Hopkins Arthritis Center.  Accessed November 19, 2007.

(2) “Rheumatoid Arthritis.”  Wikipedia.org.  Accessed November 1, 2007.

(3)  Rheumatoid arthritis MedicineNet.  Accessed November 19, 2007.

(4) Ward, J. R.  Role of Disease-Modifying Antirheumatic Drugs versus Cytotoxic Agents in the Therapy of Rheumatoid Arthritis.  The Amer. J. Med.  (1988) 85, 39-44.

(5) Westwick, W. J., Allsop, J., Watts, R. W. E.  The Effect of Gold Salts on the Biosynthesis of Uridine Nucleotides in Human Granulocytes.  Biochem. Pharmacol.  (1974) 23, 153-162.

(6) Abou-Khalil, W. H., Yunis, A. A., Abou-Khalil, S.  Discriminatory Effects of Gold Compounds and Carriers on Mitochondria Isolated from Different Tissues.  Biochem. Pharmacol.  (1981) 30, 3181-3186.

(7) Westwick, W. J., Allsop, J., Watts, R. W. E.  The Effect of Gold Salts on the Biosynthesis of Uridine Nucleotides in Human Granulocytes.  Biochem. Pharmacol.  (1974) 23, 153-162.

(8) deMiguel, E., Arnalich, F., Tato, E., Vaszquez, J. J., Gijon-Banos, J., Hernanz, A.  The Effect of Gold Salts on Substance P Levels in Rheumatoid Arthritis.  Neurosci. Letters.  (1994) 174, 185-187.

(9) Stone, K. J., Mather, S. J., Gibson, P. P.  Selective Inhibition of Prostaglandin Biosynthesis by Gold Salts and Phenylbutazone.  Prostaglandins.  (1975) 10, 241-251.

(10) Abou-Khalil, W. H., Yunis, A. A., Abou-Khalil, S.  Discriminatory Effects of Gold Compounds and Carriers on Mitochondria Isolated from Different Tissues.  Biochem. Pharmacol.  (1981) 30, 3181-3186.

(11) Abou-Khalil, W. H., Yunis, A. A., Abou-Khalil, S.  Discriminatory Effects of Gold Compounds and Carriers on Mitochondria Isolated from Different Tissues.  Biochem. Pharmacol.  (1981) 30, 3181-3186.

(12) Rawson, N. S. B., Harding, S. R., Malcolm, E., Lueck, L.  Hospitalizations for Aplastic Anemia and Agranulocytosis in Saskatchewan:  Incidence and Associations with Antecedent Prescription Drug Use.  J. Clin. Epidem.  (1998) 51, 1343-1355.

(13) Methotrexate on drugs.com Accessed November 24, 2007.

(14)  Gold sodium thiomalate Accessed November 24, 2007.

(15) Gold sodium thiomalate Accessed November 25, 2007.

(16) Methotrexate Accessed November 25, 2007.

Image source

“Oxidative decoupling” from Voet, Donald; Voet, Judith G.; Pratt, Charlotte W. Fundamentals of Biochemistry; Life at the Molecular Level. Ch. 17:
Electron Transport and Oxidative Phosphorylation. Figure 17-20. John
Wiley & Sons, Inc. 2nd ed. 2006. p. 568.

Author: Megan Love Huffman


 

Cause of Iron Deficiency During Pregnancy

During pregnancy the volume of blood in maternal circulation must increase concordantly with the growth of the fetus and the need for a fetal circulation system (5). Between 600 mg and 1 g of iron is required to accommodate this change (2). As a consequence of this, iron is depleted from maternal resources (blood levels and stores) to provide sufficient levels for the fetus. Once the maternal sources have been depleted the mother may enter a state of iron deficiency that can progress to anemia.

Anemia

Iron deficiency is defined as “low serum ferritin and sparse or absent stainable iron in bone marrow” (1). Numerous studies have shown detrimental effects to mother and fetus when iron is deficient. These effects include but are not limited to: maternal and fetal morbidity, low birth weight, decrease in duration of pregnancy, lower Apgar scores during labor, and increased risk of cardiovascular disease in adulthood (1,2). The World Health Organization reports that 35% to 75% of pregnant women in developing countries and 18% of pregnant women in industrialized countries are anemic with approximately 43% of women being anemic before they become pregnant (9).

Iron Supplementation

Iron supplementation during pregnancy to prevent iron deficiency can be an effective means of preventing some of the previously mentioned problems (1). However, it has been hard to quantify the benefits because of confounding factors such as maternal iron status prior to pregnancy (symptoms do not show until late in pregnancy) and the number of previous pregnancies. The WHO recommends that pregnant women in areas with a “high prevalence of malnutrition” be given iron supplementation at a dose of 60 mg per day for six months to prevent and treat iron deficiency anemia. In areas where the prevalence of anemia is more than 40%, the WHO suggests that supplementation continue for three months after the delivery of the child (9).
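
A rough comparison of this regimen with the 600 mg-1 g requirement quoted earlier is sketched below; the 10-20% absorption range is an illustrative assumption, since oral iron absorption varies widely and is not specified in the cited sources.

```python
# Rough comparison of the WHO supplementation regimen with the extra iron
# a pregnancy requires. The absorption fractions are illustrative
# assumptions, not figures from the cited references.
dose_mg_per_day = 60
duration_days = 6 * 30                   # "six months"
total_supplemented_mg = dose_mg_per_day * duration_days

pregnancy_requirement_mg = (600, 1000)   # 600 mg - 1 g quoted in the text

for absorbed_fraction in (0.10, 0.20):
    absorbed_mg = total_supplemented_mg * absorbed_fraction
    print(f"Absorption {absorbed_fraction:.0%}: ~{absorbed_mg:.0f} mg absorbed "
          f"vs. {pregnancy_requirement_mg[0]}-{pregnancy_requirement_mg[1]} mg required")
# Even if only a small fraction is absorbed, the regimen can roughly cover
# the extra iron demand of pregnancy.
```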

Mechanism of Iron Transfer from Mother to Fetus

Iron has to be transferred from the mother to the fetus via the placenta, and this transfer cannot be reversed (7). In the maternal circulation, iron bound to transferrin (Tf), an iron transport protein, circulates as diferric transferrin and is targeted to transferrin receptors (TfR) on the apical surface of the placental syncytiotrophoblast. The Tf (with its iron) is taken into the cells and the iron is released, producing maternal apotransferrin that is returned to the maternal circulation. The iron released into the placental cells is captured by ferritin, an iron storage protein, transferred to available fetal apotransferrin, and exits into the fetal circulation as holotransferrin (1).

iron transport

Regulation of Placental Iron Transfer

It is believed that the rate of iron transfer from mother to fetus is controlled by the placenta. Studies have shown that when the fetus is removed but the placenta remains, the amount of iron taken up from the maternal circulation stays the same, suggesting that the presence of the fetus has little effect on iron uptake (8). It is thought that the placenta controls iron transfer by regulating the number of transferrin receptors on the side of the placenta facing the maternal circulation (7). Although it is known that the functions of the placenta are controlled by cytokines, the exact mechanism by which the increase in TfR is regulated is not known (5). TNF-alpha is a cytokine suspected of regulating the number of transferrin receptors on the maternal side of the placenta. Studies have shown that TNF-alpha can induce apoptosis of placental cells, and high levels of TNF-alpha have been found in the placentas of early- to mid-stage failed pregnancies (5). Another study has shown that iron supplementation can increase TNF-alpha production while the presence of an iron chelator can decrease it (6). This suggests that TNF-alpha regulates TfR levels in a counter-intuitive manner, wherein decreases in available iron decrease levels of TNF-alpha and increase levels of TfR. The exact mechanism by which this is accomplished remains unknown.

References

(1) Allen, L.H. Anemia and iron deficiency: effects on pregnancy outcome. The American Journal of Clinical Nutrition 71, 1280S-1284S (2000).

(2) Gambling, L., Danzeisen, R., Fosset, C., Andersen, H.S., Dunford, S., Kaila, S., Srai, S., McArdle, H.J. Iron and copper interactions in development and the effect of pregnancy outcome. The Journal of Nutrition 133, 1554S-1556S (2003).

(3) Killip, S., Bennet, J.M., Chambers, M.D. Iron deficiency anemia. American Family Physician 75, 671-682 (2007).

(4) Lewis, R.M., Doherty, C.B., Burton, G.J., Hales, C.N. Effects of maternal iron restriction on placental vascularization in the rat. Placenta 22, 534-539 (2001).

(5) McArdle, H.J., Danzeisen, R., Fosset, C., Gambling, L. The role of placenta in iron transfer from mother to fetus and the relationship between iron status and fetal outcome. BioMetals 16, 161-167 (2003).

(6) Scaccabarozzi, A., Arosio, P., Weiss, G., Dongiovanni, P., Franzani, A.L., Mattiolo, M., Levi, S., Fiorello, G.F. Relationship between TNF-alpha and iron metabolism in differentiating human monocytic THP-1 cells. British Journal of Haematology 110, 978-984 (2000).

(7) Srai, S.K., Momford, A., McArdle, H.J. Iron transport across cell membranes: molecular understanding of duodenal and placental iron uptake. Best Practice and Research Clinical Haematology 15, 243-259 (2002).

(8) Wong, C.T., McArdle, H.J., Morgan, E.H. Effect of iron chelators on placental uptake and transfer of iron in rat. American Journal of Physiology 15, 243-259 (1987).

(9) http://www.who.int/making_pregnancy_safer/en

Author: Sara Dudgeon


 

Candy: A Sweet Treat or Dangerous Trick

Every Halloween, parents are warned of the possible dangers of the candy being passed out. From razor blades in candied apples to ensuring all candy is properly sealed, parents have their hands full checking the candy. However, parents also have to worry about the contents of candy straight from the manufacturers. Unfortunately, children are receiving chocolate candy with varying levels of lead. But is this a problem, given that the FDA sets a maximum allowable lead level of 0.1 ppm? That limit was only lowered from 0.5 ppm in 2006, and it applies only to candies marketed toward children, which excludes dark chocolate and food products that contain chocolate but are not considered to have a child-based market. In addition, distinct differences in manufacturing techniques, ingredients, and lead content exist between chocolates made in different countries.(1) Mexican chocolate, in particular, can prove extremely hazardous because of the extra ingredients added. This contribution reviews the differences between chocolates, the contamination routes, the potential dangers of lead consumption, and actions to take (focusing on the danger of lead consumption).

It’s All an Illusion

The differences between domestic chocolate and Mexican chocolate are not so distinct that you can easily screen them from your children. However, key ingredients not found in domestic chocolate are used in Mexican chocolate, namely chili and tamarind. The tradition of using chili powder in the chocolate recipe dates back to 595 A.D. and is considered essential to the culinary culture.(2)

So why are chili and tamarind a problem? In addition to the potential contamination from lead ink in the wrapper, the cocoa bean, and the manufacturing method, Mexican chocolate contains chilies that are often contaminated with lead. The peppers are grown in lead-rich soil and then dried for use in chocolate; the drying actually concentrates the lead. The Orange County Register (CA) conducted a study of chili to determine its lead content.(3) From soil samples to chili powder samples to finished-product samples, lead was a common factor. In a sample from a Morelia market (from a seller that supplies chocolate companies), lead was found at 1.5 ppm in a chili powder commonly used in Mexican chocolate production (tamarind from the same market also contained 1.5 ppm). Throughout the testing of the water, soil, chili powder, and final products, lead contents were found as high as 5 ppm. In addition to traditional chocolate, powdered snack-mix products are generally made in Mexico and contain chili.(1) Because dark chocolate is a great risk to the young children and elderly who consume it, yet is not restricted in lead content like milk chocolate, a case study is being conducted to determine the threat. It is known that dark chocolate contains double the lead content of milk chocolate, and it is thus more of a threat because it is not regulated.(4) Furthermore, some lead comes from the chocolate liquor used to make chocolate (dark chocolate containing more than milk chocolate).(1)

Where Does it Come From?

In 2005, the chocolate industry ran a test of 137 samples from seven different milk chocolate products and 226 samples from nine dark chocolate products. These tests found that the milk chocolate contained up to 0.222 ppm lead and the dark chocolate as much as 0.275 ppm. So where is all the lead coming from? In March 2006, Dagoba recalled several of its chocolate lines because of contamination from new solder in one of the grinding machines.(5) It is well documented that lead can come from production and processing, packaging, and storage.(6) Production and processing have been discussed, and consist of absorption from the soil and grinding or cutting contamination. Packaging also accounts for some lead content, since the bright yellow and red dyes on candy wrappers may contain lead.(7) It is also important to note that the primary exposure to lead is through ingestion.
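
To put these concentrations in perspective, the sketch below converts them into micrograms of lead per serving; the 40 g serving size is an illustrative assumption, not a figure from the cited studies.

```python
# Rough estimate of lead ingested from a single chocolate serving at the
# concentrations quoted above. The 40 g serving size is an illustrative
# assumption.
def lead_per_serving_ug(lead_ppm, serving_g):
    # 1 ppm = 1 microgram of lead per gram of chocolate
    return lead_ppm * serving_g

serving_g = 40.0
for label, ppm in [("FDA limit (children's candy)", 0.10),
                   ("milk chocolate, highest sample", 0.222),
                   ("dark chocolate, highest sample", 0.275)]:
    ug = lead_per_serving_ug(ppm, serving_g)
    print(f"{label:32s} {ug:5.1f} ug Pb per {serving_g:.0f} g serving")
# A single serving delivers only a few micrograms of lead, but because lead
# accumulates in the body, regular consumption adds to total exposure.
```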

Dangers of Lead

Everyone is at risk for lead poisoning, but children and the elderly are affected at lower exposure levels than others. Lead is a cumulative hazard, and the multiple pathways by which it enters the body make it an even greater problem. A person has to ensure not only that their pipes and paint are lead-free, but that everything they come in contact with is as well. Americans eat on average twelve pounds of chocolate per year, with intense marketing toward children.(8) Children under the age of six are at higher risk than others because their brains and central nervous systems are still developing.(9) Even low levels can reduce IQ, cause learning disorders, stunt growth, impair hearing, and cause kidney damage.(9) It has also been found that children with higher levels of lead are more prone to injuries and falls.(5) Roughly 430,000 American children between the ages of 1 and 5 have blood-lead levels above 10 µg/dL.(10) The resulting health effects in these children are estimated to cost $43.4 billion.(11) Children also ingest lead more frequently and in larger quantities than adults, and they absorb more of the lead they ingest.(6) Lead exposure in childhood has also been linked to higher absenteeism in high school, lower class rank, poorer verbal skills, longer reaction times, and poor hand-eye coordination.(6) In addition, the uptake of lead is increased when certain other metals are present; one study found higher absorption of lead when calcium was ingested at the same time.(12) Be warned, too, that some vitamin supplements contain more than the bottle states, so the lead content (which should be zero) and the calcium content might be higher than expected.(13) Thus, although you may think you are helping your child grow with a one-a-day chewable, you might be stunting their growth.

Overall Physiological Effects(6)

  • At high levels, lead can cause ataxia, coma, convulsions, death, hyperirritability, and stupor.
  • In adults, high levels are also known to decrease libido and cause depression, fatigue, forgetfulness, impaired concentration, impotence, and weakness, among other effects.
  • Children with acute lead exposure can have renal effects that are reversible if treated.
  • Lead decreases heme biosynthesis by inhibiting d-aminolevulinic acid dehydratase (ALAD) and ferrochelatase activity.
    1. Ferrochelatase catalyzes the insertion of iron into protoporphyrin IX.
    2. The decrease also leads to increased erythrocyte protoporphyrin (EP), which binds zinc more readily than iron.
  • The heme synthesis pathway serves neural, renal, endocrine, and hepatic functions.
  • Lead exposure also impedes the conversion of vitamin D into its hormonal form, which is responsible for extra- and intracellular calcium homeostasis (affecting bone growth).
  • Lead has also been linked to hypertension, although many factors contribute to hypertension.

Symptoms(6)

  • Most people with lead exposure are asymptomatic.

An informative table can be found at http://www.atsdr.cdc.gov/csem/lead/pbcover_page2.html.

In addition to looking for signs and symptoms, long-bone radiographs are used to determine exposure. Below are two images, of a five-year-old and a three-year-old respectively, showing “lead lines”.(6)

lead lines 5
Five year old with “lead lines”

lead lines 3
Three-year old with “lead lines”

What to Do?

You should ensure that your environment has little to no lead; children are especially susceptible to inhaling or eating lead from a household environment. Although it is unnecessary to exclude chocolate from the diet, it should be eaten sparingly and considered a treat.(5) Be aware of the unknown lead content and higher-than-labeled calcium content in the vitamins you give your children. Also be aware of the symptoms, and get the child checked out immediately if you suspect any exposure. Remember that you are at risk as well, and make sure your own lead intake is not high. So this Halloween, while checking your child’s candy, be sure to think about the lead inherent in all those candies, and limit them to the pieces you will let them eat.

Resources

The History of Mexican Chocolate

Orange County Register “Toxic Treats”

Agency for Toxic Substances & Disease Registry

Limiting Lead Intake from Chocolate

References

(1) ElAmin, A. Regulator lowers limits of lead levels in children’s candy. Decision News Media SAS. (2006).

(2) DeWitt, D. Chiles and Chocolate. FieryFoods.com. <http://www.fiery-foods.com/dave/chilechoc.asp>

(3) Godines, V. Tests suggest lead introduced in powder. Freedom Communications, Inc. (2005).

(4) Raloff, J. Leaden Chocolates. Science News. 168, No. 19 (2005).

(5) McRandle, P.W. Lighten Hearts. Green Guide. 118 (2007).

(6) Agency for Toxic Substances and Disease. Lead Toxicity. Case Studies in Environmental Medicine. 9-55 (2007).

(7) Mushak P, Davis JM, Crocetti AF, Grant LD. Prenatal and postnatal effects of low-level lead exposure: integrated summary of a report to the US Congress on childhood lead poisoning. Environmental Research 50:11-36 (1989).

(8) American Environmental Safety Institute. Lead in Chocolate: The impact on children’s health. American Environmental Safety Institute Fact Sheet. (2002).

(9) National Safety Council. Lead Poisoning. National Safety Council. (2004).

(10) EPA. 2004b. Measure B1: Lead in the blood of children. Washington, DC. <www.epa.gov/economics/children/body_burdens/b1.htm>

(11) Landrigan PJ, Schechter CB, Lipton JM, et al. Environmental pollutants and disease in American children: Estimates of morbidity, mortality, and costs for lead poisoning, asthma, cancer, and developmental disabilities. Environmental Health Perspective 110 (7):721–728 (2002).

(12) Yanez, L., Batres, L., Carrizales, L., Santoyo, M., Escalante, V., and Diaz-Barriaga, F. Toxicological assessment of azarcon, a lead salt used as a folk remedy in Mexico. I. Oral toxicity in rats. Journal of Ethnopharmacology. 41 91-97 (1994).

(13) Garcia-Rico, L., Leyva-Perez, J., and Jara-Marini, M.E. Content and daily intake of copper, zinc, lead, cadmium, and mercury from dietary supplements in Mexico. Food and Chemical Toxicology. 45 1599-1605 (2007).

Author: Jessica Sheehan


 

Photodynamic therapy (PDT) is a technique that makes use of light, a photosensitizer and molecular oxygen to orchestrate programmed cell death in various biological systems through the resulting generation of highly reactive oxygen species (ROS).(1)

The principle of using light for the treatment of disease has been known for centuries and has been traced as far back as the ancient Egyptians (about 4000 years ago), who successfully used light in conjunction with the orally ingested Ammi majus plant to treat vitiligo, a skin disorder of unknown cause.(2) In the late nineteenth century, Finsen demonstrated the successful use of heat-filtered light from a carbon arc lamp in treating lupus vulgaris, a tubercular skin condition, which earned him the Nobel Prize in Physiology or Medicine in 1903.(3) In the early twentieth century, the first account of PDT being used to treat solid tumors (cancer) was reported by von Tappeiner’s group in Munich.(4, 5) Much later, in the late 1980s, further studies culminated in the development of the photosensitizer Photofrin (Figure 1), which in 1993 was approved by the Canadian health agency for the treatment of bladder cancer and later in Japan, the USA and some European countries for the treatment of certain esophageal cancers and non-small cell lung cancers.(6)

photofrin

Figure 1. Structure of Photofrin, n = 1-9.
The structure of Photofrin has been found to be oligomeric in nature and consists of a number of porphyrin monomers, each comprising a basic tetrapyrrolic porphine scaffold (four pyrrole sub-units interconnected by four methine groups).

The success of PDT treatment is attributed to the fact that it is highly selective, allowing cancerous tissue to be targeted without the collateral damage that can accompany other cancer chemotherapeutic agents. Because diseased cancerous tissue is metabolically more active than healthy tissue, the photosensitizer, after administration, readily accumulates in the rapidly dividing cancerous tissue.(1) Once the photosensitizer has accumulated in the cancerous tissue, it remains inactive until activated by an external source of light irradiation. Light of the correct wavelength is then directed at the tumor site, electronically exciting the photosensitizer, which subsequently transfers this excitation to molecular oxygen (triplet oxygen) within the tissue cells to form cytotoxic singlet oxygen, as well as other ROS.(7) It is these resulting products that attack cellular components such as DNA and proteins, resulting in cell lysis and eventual death. An outline adapted from Josefsen and Boyle (6) showing this overall process is depicted below in Figure 2.

                               stages of photodynamic therapy

Figure 2. The Stages of Photodynamic Therapy

light therapy           pdt treatment
Figure 3. Applications of External Irradiation

The generation of singlet oxygen results from the reaction between the excited triplet-state photosensitizer (3Psen*) and ground-state molecular oxygen (3O2) via a type II process, which is a spin-allowed transition. The triplet-state photosensitizer is formed beforehand by intersystem crossing (ISC) from the excited singlet-state photosensitizer, which itself forms upon absorption of the external irradiation. An energy diagram adapted from Josefsen and Boyle (6) highlighting the formation of singlet oxygen is depicted below in Figure 4.
jablonski diagram
Figure 4. Formation of Singlet Oxygen

As mentioned earlier, Photofrin (Figure 1) was the first PDT photosensitizer to be approved for clinical use; however, it is far from ideal, as it causes prolonged patient photosensitivity owing to poor clearance from the body and it absorbs poorly at long wavelengths. Ideally, photosensitizers should absorb at longer wavelengths, such as in the red or near-infrared region of the electromagnetic spectrum (Figure 5), as this allows for deeper tissue penetration.(8)
electromagnetic spectrum
Figure 5. Electromagnetic spectrum
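
The advantage of red and near-infrared light can be illustrated with a simple exponential attenuation model, I(d) = I0·e^(−d/δ), where δ is an effective penetration depth. The sketch below uses order-of-magnitude δ values chosen for illustration only; real penetration depths depend strongly on tissue type and are not taken from reference (8).

```python
from math import exp

def transmitted_fraction(depth_mm, penetration_depth_mm):
    """Simple exponential attenuation: I(d)/I0 = exp(-d / delta),
    where delta is the effective 1/e penetration depth."""
    return exp(-depth_mm / penetration_depth_mm)

# Illustrative effective penetration depths in mm (order of magnitude only;
# real values depend strongly on the tissue being irradiated):
penetration_depth = {"blue-green (~500 nm)": 0.7,
                     "red (~630 nm)":        2.0,
                     "near-IR (~750 nm)":    4.0}

target_depth_mm = 5.0
for band, delta in penetration_depth.items():
    frac = transmitted_fraction(target_depth_mm, delta)
    print(f"{band:22s} ~{frac:.1%} of light reaches {target_depth_mm:.0f} mm")
# Longer-wavelength light delivers far more of its intensity at depth, which
# is why photosensitizers absorbing in the red/near-IR are preferred.
```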

Ever since the approval of Photofrin, researchers from around the globe have been trying to develop new PDT photosensitizers.(6) A large number of metal-macrocycle complexes have more recently been developed for use as photosensitizers, with varied photodynamic consequences. Two metal-containing PDT photosensitizers of interest that will be briefly discussed in this report are Purlytin and Lutex.

Purlytin (Figure 6) is a drug marketed by Miravant Medical Technologies of Santa Barbara, California, USA. It has been successfully used in the treatment of the non-malignant conditions psoriasis and restenosis.(6) It has also undergone Phase II clinical trials in the USA for PDT treatment of cutaneous metastatic breast cancer and of Kaposi’s sarcoma in HIV patients.(6)

purlytin
Figure 6. Structure of Purlytin

Purlytin contains a chlorin macrocyclic scaffold with a tin atom bound in the central cavity. The presence of the chelated tin atom has been shown to alter the electronic nature of the chromophore, causing a red shift (20-30 nm) in its absorption compared to the metal-free chlorin. This gives rise to a photosensitizer with an absorption range of 650-680 nm in the electromagnetic spectrum.(6)

Lutex (Figure 7) is a drug marketed by Pharmacyclics, California, USA. Lutex has entered Phase II clinical trials in the USA for testing against breast cancer and malignant melanomas.(6) Lutex contains a texaphyrin scaffold with a lutetium atom chelated centrally in its ligation site. The texaphyrins are derivatives of porphyrins but possess a penta-aza central core. The molecule was shown to have a maximum absorption in the 730-770 nm range of the electromagnetic spectrum, making it a highly promising candidate for deeper-tissue PDT applications. Interestingly, it has been shown that the presence of the central metal atom in the texaphyrin scaffold plays a huge role in photoactivation, as the ligand derivatives alone show little absorption.(6)
lutex
Figure 7. Structure of Lutex
References

(1) Josefsen, L. B.; Boyle, R. W. “Photodynamic therapy: novel third-generation photosensitizers one step closer?” British Journal of Pharmacology 2008, 154, 1-3.
(2) Edelson, M. F. “Light-activated drugs.” Scientific American 1988, 259, 68–75.
(3) Bonnett, R “Photosensitizers of the porphyrin and phthalocyanine series for photodynamic therapy.” Chemical Society Reviews 1995, 24, 19–33.
(4) Sternberg, E. D.; Dolphin, D.; Brückner, C. “Porphyrin-based photosensitizers for use in photodynamic therapy.” Tetrahedron 1998, 54, 4151–4202.
(5) Allison, R. R.; Mota, H. C.; Sibata, C. H. “Clinical PD/PDT in North America: an historical review.” Photodiagnosis and Photodynamic Therapy 2004, 1, 263–277.
(6) Josefsen, L. B.; Boyle, R. W. “Photodynamic Therapy and the Development of Metal-Based Photosensitisers.” Metal Based Drugs 2008, 1-24.
(7) Farrer, N. J.; Sadler, P. J. Aust. J. Chem. 2008, 61, 669–674.
(8) Patrice, T. Photodynamic therapy Royal Society of Chemistry, 2004, 260.

Image Credits

Light application 1
Light application 2

Electromagnetic Spectrum

Author: Allan Prior


 

Manganism is a neurodegenerative disorder resulting from chronic exposure to abnormally high levels of the essential element manganese. The symptoms very closely resemble those of Parkinson’s disease, including bradykinesia (slow movement), dystonia (sustained muscle contractions), and disturbance of gait. As one would expect (just as in Parkinson’s), manganese toxicity is most severe in the basal ganglia, which are responsible for initiating and modulating movements. However, manganese exposure affects a different part of the basal ganglia circuit. Parkinson’s disease results from the degeneration of dopaminergic axons projecting from the substantia nigra to the striatum. Dopamine normally provides an excitatory input to the caudate/putamen, the area that receives the majority of the motor information in the basal ganglia. The projections from the caudate/putamen to the globus pallidus are inhibitory, so degeneration of the nigrostriatal pathway results in decreased inhibition of neurons in the globus pallidus. The projections from the globus pallidus to the thalamus are also inhibitory, so decreased inhibition in the globus pallidus actually results in increased inhibition of the thalamus. The signal from the thalamus to the primary motor cortex is thus reduced, producing a deficit in the initiation of movement. In contrast to Parkinson’s, manganism symptoms result from the degeneration of the inhibitory GABAergic input from the globus pallidus to the thalamus. This results in an overall decreased inhibition of the thalamus, and thus thalamocortical signaling is increased. The two diseases therefore produce similar movement disorders, but by affecting different parts of the same pathway and producing opposite net changes in the balance of thalamic excitation and inhibition. (1)

mn basal ganglia

Manganism is caused primarily by the inhalation of trace amounts of manganese rather than by absorption in the gastrointestinal tract (where manganese is poorly absorbed). Entry into the brain can occur primarily via two pathways. First, manganese may be inhaled through the lungs and absorbed into the bloodstream; from there it is able to cross the blood-brain barrier via specific transport proteins.(2) The other possible route is directly through the olfactory system: manganese can be taken up by olfactory sensory neurons in the epithelium and transported trans-synaptically throughout the brain. These inhalation routes make manganism more prevalent in occupations such as welding and mining, where fumes containing MnO2 are especially concentrated. (3)

The actual mechanism by which manganese targets these individual neurons is not yet known. Even the cellular entry mechanisms are not fully understood. Two main pathways have been proposed: a transferrin-dependent and a transferrin-independent route, both of which operate similarly to those for iron transport.(4,5)  The transferrin-dependent pathway involves the binding of Mn3+ to the transferrin protein. This complex then binds to the transferrin receptor, is invaginated by an endosome, and is then dissociated and reduced to Mn2+ in the acidic environment. The transferrin-independent pathway can be mediated by a number of different channels, such as the divalent metal transporter 1 (DMT1), a voltage-gated Ca2+ channel, or even the glutamate-gated N-methyl-D-aspartate (NMDA) receptor. (5)

Mn transport

Such mechanisms are most likely used in combination and might determine the differing susceptibility not only between cell types but also between individuals. These remaining questions will be the topics of much research to come.

Resources

Symptoms of manganism and FAQ

Metals and neurodegeneration

Manganism research: history, critique, and unanswered questions

References

(1) D. P. Perl and C. W. Olanow. The neuropathology of manganese-induced parkinsonism. J Neuropathol Exp Neurol 66 (8), 675 (2007).

(2) M. Aschner. Manganese: Brain transport and emerging research needs. Environ Health Perspect 108 Suppl 3, 429 (2000).

(3) A. B. Santamaria, C. A. Cushing, J. M. Antonini et al. State-of-the-science review: Does manganese exposure during welding pose a neurological risk? J Toxicol Environ Health B Crit Rev 10 (6), 417 (2007).

(4) J. A. Roth and M. D. Garrick. Iron interactions and other biological reactions mediating the physiological and toxic actions of manganese. Biochem Pharmacol 66 (1), 1 (2003).

(5) J. A. Roth. Homeostatic and toxic mechanisms regulating manganese uptake, retention, and elimination. Biol Res 39 (1), 45 (2006).

Author: James Corson


 

History and Background

Vaccines play an important role in maintaining the health of the American population.  Anne Schuchat, Director of the CDC’s Center for Immunization and Respiratory Diseases, estimates that every year vaccines prevent 33,000 deaths and 14 million infections, and save the United States $34 billion in health care costs. (3)

vaccine syringe

Thimerosal is used in vaccines as an additive that prevents bacterial and fungal contamination and infection, especially in multi-dose containers, which are prone to contamination. (10)  It has been present in vaccines since the 1930s, and until 1999 approximately thirty vaccines in the United States contained thimerosal.  Five important vaccines for infants contained thimerosal: diphtheria, tetanus, pertussis, Haemophilus influenzae (Hib), and Hepatitis B.  Chemically, thimerosal is roughly fifty percent mercury by weight.  In the body, thimerosal is metabolized to ethylmercury (chemical formula: C2H5Hg+) and thiosalicylate.  Thimerosal is also known by its International Nonproprietary Name (INN) Thiomersal and its IUPAC name Ethyl(2-mercaptobenzoato-(2-)-O,S) mercurate(1-) sodium.  Its molecular formula is C9H9HgNaO2S. (11)  According to the FDA, “As a vaccine preservative, thimerosal is used in concentrations of 0.003% to 0.01%.  A vaccine containing 0.01% thimerosal as a preservative contains 50 micrograms of thimerosal per 0.5 ml dose or approximately 25 micrograms of mercury per 0.5 ml dose.” (10)

thimerosal chemical structure
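
The FDA arithmetic quoted above can be checked directly from the molecular formula; the sketch below uses standard molecular weights for thimerosal and mercury.

```python
# Verifying the quoted FDA figures: a 0.01% (w/v) thimerosal preservative
# concentration in a 0.5 mL vaccine dose.
THIMEROSAL_MW = 404.81   # g/mol, C9H9HgNaO2S
HG_MW = 200.59           # g/mol

concentration_w_v = 0.01 / 100      # 0.01% w/v -> grams per mL
dose_volume_mL = 0.5

thimerosal_ug = concentration_w_v * dose_volume_mL * 1e6   # grams -> micrograms
mercury_ug = thimerosal_ug * HG_MW / THIMEROSAL_MW

print(f"Thimerosal per dose: {thimerosal_ug:.0f} ug")
print(f"Mercury per dose:    {mercury_ug:.1f} ug  "
      f"({HG_MW / THIMEROSAL_MW:.0%} of thimerosal by weight)")
# ~50 ug thimerosal and ~25 ug mercury per 0.5 mL dose, matching the FDA figures.
```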

In 1998, Andrew Wakefield, a gastroenterologist, published an article in Lancet that claimed to find a link between autism and the MMR (Measles, Mumps, Rubella) vaccine. (2)  Until that time, the main side effect associated with thimerosal was minor irritation at the injection site. (10)  This sparked debate within the medical community, and in response multiple agencies within the United States and abroad undertook research to see whether there was a connection between thimerosal, ethylmercury, and Pervasive Developmental Disorders (under which autism falls).  In 1999, in response to concerns about the harmful effects of prenatal exposure to methylmercury, the FDA realized that, due to thimerosal, infants may have been exposed to a total amount of ethylmercury that exceeded the recommendations for methylmercury. (10)  However, a major difference between ethylmercury and methylmercury is that ethylmercury is excreted by the body at a faster rate. (7)  Ethylmercury has a much shorter half-life than methylmercury (less than one week vs. 1.5 months). (12)  That same year, in an attempt at a preemptive strike against a possible risk, the American Academy of Pediatrics and the United States Public Health Service recommended that thimerosal be eliminated as a preservative in vaccines. (10)  At that time, other countries such as Canada and Denmark had already begun to phase out thimerosal due to the emergence of newer vaccines that no longer required it or could be damaged by its presence. (2)
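
The practical consequence of this half-life difference can be illustrated with a simple first-order elimination model (a simplification of real mercury kinetics), using the figures quoted above.

```python
# Comparing clearance of ethylmercury and methylmercury using the half-lives
# quoted above (< 1 week vs. ~1.5 months), assuming simple first-order
# elimination.
def fraction_remaining(days_elapsed, half_life_days):
    return 0.5 ** (days_elapsed / half_life_days)

ethyl_half_life_days = 7       # "less than one week" -- taken as an upper bound
methyl_half_life_days = 45     # "~1.5 months"

for day in (30, 60, 90):
    e = fraction_remaining(day, ethyl_half_life_days)
    m = fraction_remaining(day, methyl_half_life_days)
    print(f"Day {day:3d}: ethylmercury {e:7.4f} remaining, methylmercury {m:5.2f} remaining")
# A month after exposure almost no ethylmercury remains, while a large fraction
# of a comparable methylmercury dose would still be present.
```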

Further Research

In 2001, the Institute of Medicine (IOM) released the initial findings of its Immunization Safety Review Committee.   The committee found that, at that time, there was not sufficient evidence to either accept or reject a link between thimerosal-containing vaccines (TCVs) and autism and other disorders, but called the hypothesis “biologically plausible.”  In 2004, the IOM committee released its final report, which included information from studies performed in the United States, Denmark, Sweden, and the United Kingdom.  Its final conclusion was a “rejection of a causal relationship between thimerosal containing vaccines and autism,” as well as a “rejection of a causal relationship of the MMR vaccine and autism.” (6)  Also in 2004, ten of the thirteen authors that worked with Wakefield retracted their original conclusion that there was a causal link between MMR vaccines and autism.  This coincided with the release of information about a serious conflict of interest: in 1997, one year before Wakefield’s study was published, he applied for a patent for a vaccine to replace the current MMR one. (2)  In an article published in 2006 in Pediatrics, the official journal of the American Academy of Pediatrics, researchers studying children in Montreal stated that “Thimerosal exposure was unrelated to the increasing trend in pervasive developmental disorder,” and “PDD rates were at their highest value in birth cohorts that were thimerosal free.” (5)

The Centers for Disease Control (CDC) recently released findings in September 2007 about TCVs and neuropsychological outcomes.  This research, which was published in The New England Journal of Medicine, looked at children ages seven to ten years old to determine if early thimerosal exposure was related to onset of neuropsychological symptoms.  According to Anne Schuchat, Director of the CDC’s Center for Immunization and Respiratory Diseases, “some of the results suggested that exposure to higher thimerosal quantities led to better performance. And some of the tests showed that exposure to higher thimerosal content led to worse performance.”  Taken together, the researchers determined that receiving TCVs poses no future risk to children.  At the same time however, the researchers observed a risk of higher incidence of tics in boys due to early thimerosal exposure.  However, the study was unable to conclusively link the two, and the CDC will be doing further research. (3)

Conclusion

In March 2000, a thimerosal-free version of the Hepatitis B vaccine became available in the United States. (10)  Other childhood vaccines were quick to follow suit.  According to the Food and Drug Administration (FDA), “At present, all routinely recommended vaccines for U.S. infants are available only as thimerosal-free formulations or contain only trace amounts of thimerosal (≤ 1 microgram mercury per dose), with the exception of inactivated influenza vaccine. Inactivated influenza vaccine for pediatric use is available in a thimerosal-preservative containing formulation and in formulations that contain either no thimerosal or only a trace of thimerosal, but the latter is in more limited supply.” (10)  Thus, the presence of thimerosal in vaccines should not be considered a public health concern.  For more information about which vaccines previously contained thimerosal, visit the FDA’s website here.
In his articles “True Believers: Why There’s no Dispelling the Myth that Vaccines Cause Autism” and “Thimerosal on Trial,” Arthur Allen examines why the myth that TCVs cause autism continues to persist.  He argues that one of the keys to this controversy is the work of Dr. Mark Geier and his son, David Geier.  Both are supporters of the “vaccines cause autism” theory and have published several articles containing their findings.  The elder Geier has also claimed to have developed a treatment for autism. (1, 2)  However, in a review of research on the TCV link to autism published in 2004 in Pediatrics, researchers concluded that “the four epidemiologic studies that support an association between thimerosal exposure and NDDs including autism, all by the same authors and using overlapping data sets, contain critical methodologic flaws that render the data and their interpretation noncontributory.” (8)

Along with the Geiers, other “quack” healers have latched on to the theory that mercury causes autism, and have been using chelator therapies as a means to “cure” children of their autism. (1)  In August of 2005, a five year old boy died of an arrhythmia after receiving EDTA, a chelating agent, and it is estimated that about 10,000 autistic children in the United States receive mercury-chelating agents every year.  Currently, 4800 cases of children whose parents claim their autism was caused by vaccines have been brought before the Vaccine Injury Compensation Program. (7)  Thus, even though all major governmental regulatory agencies, including the CDC and the FDA have rejected the claim that thimerosal exposure can be linked to disorders such as autism, public opinion on this matter remains divided.

More Information

National Network for Immunization Information

Food and Drug Administration: Thimerosal

Centers for Disease Control: Thimerosal

References

(1) Allen, A.  Thimerosal on Trial.  Slate. http://www.slate.com/id/2166939/ (2007).

(2) Allen, A.  True Believers: Why There’s no Dispelling the Myth that Vaccines Cause Autism.  Slate. http://www.slate.com/id/2169459/ (2007).

(3) Centers for Disease Control and Prevention: New England Journal of Medicine Telebriefing.  Early Thimerosal Exposure and Neuropsychological Outcomes at 7 to 10 Years.  http://www.cdc.gov/od/oc/media/transcripts/2007/t070926.htm (2007).

(4) Centers for Disease Control (CDC).  Recommendations Regarding the Use of Vaccines That Contain Thimerosal as a Preservative.  MMWR. Morb. Mortal Wkly. Rep.  48, 996-998 (1999).

(5) Fombonne, E. et al.  Pervasive Developmental Disorders in Montreal, Quebec, Canada: Prevalence and Links With Immunizations Pediatrics 118, 139-150 (2006).

(6) Institute of Medicine.  Immunization Safety Review: Vaccines and Autism. http://books.nap.edu/openbook.php?record_id=10997&page=1 (2007)

(7) Offit, P. A.  Thimerosal and Vaccines – A Cautionary Tale.  N. Engl. J. Med.  357, 1278-1279 (2007).

(8) Parker, S. K., Schwartz, B., Todd, J., Pickering, L.K.  Thimerosal-Containing Vaccines and Autistic Spectrum Disorder: A Critical Review of Published Original Data.  Pediatrics.  114, 793-804 (2004).

(9) Thompson, W.W. et al.  Early Thimerosal Exposure and Neuropsychological Outcomes at 7 to 10 Years.  N. Engl. J. Med.  357, 1281-1292 (2007).

(10) U.S. Food and Drug Administration (FDA).  Thimerosal in Vaccines.  http://www.fda.gov/cber/vaccine/thimerosal.htm (2007).

(11) Wikipedia.  Thiomersal.  http://en.wikipedia.org/wiki/Thiomersal (2007).

(12) World Health Organization: Global Advisory Committee on Vaccine Safety.  (2007).

Author: Sarah Kleinfeld


 

Overview

Molybdenum, like many other micronutrients, makes up very little of the dry weight of plants and animals (Figure 1). However, if it is not present in the correct amount in the diet, it can have devastating consequences.  Linxian, a region in the Henan province of China, has seen staggering rates of gastroesophageal cancer in recent years.  The high rate is believed to be due to a lack of a number of micronutrients in the soil of that region, most notably molybdenum, which leads plants to produce cancer-causing agents called nitrosamines.  This nitrosamine production may lead to oxidative damage of cells when the plant material is ingested, which is a risk factor for future carcinoma.  When antioxidants were administered in high doses, the occurrence of cancer decreased noticeably; this experimental evidence supports the oxidative-damage hypothesis.

molybdenum metal

Figure 1 Molybdenum Metal

Molybdenum Uptake

It has been shown that the only way for plants to take up molybdenum is in its anionic form.(1)  Molybdenum is found in nature in three major forms.  The most prevalent form is MoS2 in molybdenite, followed by Pb[Mo(O)4] (wulfenite) and Ca[Mo(O)4] (powellite).  It is hypothesized that these oxyanions are the anionic species of molybdenum taken up by plants.  Although eukaryotic uptake mechanisms are very poorly understood, molybdenum transporters have been found as sulfur/molybdenum cotransporters(2) and phosphate/molybdenum cotransporters in tomato plants,(3) and as distinct molybdenum transporters in the green alga Chlamydomonas reinhardtii.(4) The structure of a bacterial molybdenum storage protein that can hold ~100 Mo atoms is shown below in Figure 2.(5)

Mo storage protein

Figure 2 Bacterial Molybdenum/Tungsten Storage Protein (5)

Plants have two ways of converting nitrogen into biologically useful ammonia.  The first is the fixation and conversion of elemental nitrogen, N2.  The second is the reduction of nitrate to ammonia.(6)  This ammonia is eventually carried on to more useful metabolites, including amino acids.  In most plants, electrons for nitrate reductase are supplied by nicotinamide adenine dinucleotide phosphate in its reduced form (NADPH) and are passed from there to flavin adenine dinucleotide (FAD), then to cytochrome 557, and finally to a molybdenum complex (6,7).  This molybdenum complex, in which the metal is held by a bidentate dithiolene ligand, is called molybdopterin (Figure 3).  The nitrite produced is then passed to nitrite reductase enzymes in the plant, which ultimately form ammonia.

Mo Pterin

Figure 3 Molybdopterin

Gastroesophageal Cancer in Linxian Region

Linxian is a region in northern China that has seen esophageal and stomach cancer rates 10 times higher than the Chinese average and over 100 times higher than the US average. (8) It has been hypothesized that these cancers are caused by the high levels of nitrosamines (Figure 4) found in the diets and bodies of the Linxian population.(9)  The nitrosamines originate from plants suffering a number of vitamin and micronutrient deficiencies, most notably molybdenum deficiency.(8)  When a plant is deficient in molybdenum, nitrate reductase cannot perform properly without its molybdenum cofactor, so the plant uses other mechanisms to catabolize nitrate.  These compensatory mechanisms produce the cancer-causing nitrosamines.(8)  The mechanism of carcinogenesis is still poorly understood, but it is hypothesized to operate by some form of oxidative damage.  This theory is supported by the fact that when the diets of the Linxian population were supplemented with the antioxidants vitamin A and vitamin E, the incidence of cancer over a five-year period went down.(9)

nitrosamine

Figure 4 Nitrosamine

Molybdenum Mineral Water

Molybdenum even appears as a supplement in mineral water and can be purchased from Eniva Corporation (Figure 5).

Mo mineral water

Figure 5 Eniva Molybdenum Mineral Water

Conclusion

Although the molar quantity of molybdenum in most living systems is minute, as it is for most inorganic micronutrients, it remains a vital component of the balance of an ecosystem.  Studies like these show that diagnosing and treating human disease requires a healthy understanding of many different scientific disciplines.

Resources

Chemical Properties

Dental Problems and Diet

Copper Antagonism of Molybdenum uptake

Recommended Daily Values for Molybdenum

References

(1) Mendel, R. “Biology of the Molybdenum Cofactor”, J. of Exp. Bot., 9, 2007; 2289-2296.

(2) Alhendawi RA, Kirkby EA, Pilbeam DJ. Evidence that sulfur deficiency enhances molybdenum transport in xylem sap of tomato plants. J. Plant Nut., 28, 1347–1353 (2005).

(3) Heuwinkel H, Kirkby EA, Le Bot J, Marschner H.,  Phosphorus deficiency enhances molybdenum uptake by tomato plants. J. Plant Nut,  15 , 549–568 (1992).

(4) Llamas A, Kalakoutskii KL, Fernandez E., Molybdenum cofactor amounts in Chlamydomonas reinhardtii depend on the Nit5 gene function related to molybdate transport. Plant, Cell Environ., 23, 1247–1255 (2000).

(5) From: Schemberg, J. et al, Angew. Chem. Int. Ed., 2007, DOI: 10.1002/ange.200604858.

(6) Cowan, J.A., Inorganic Biochemistry : An Introduction, (Canada, Wiley-VCH, 1997).

(7) Kleinhofs A, Warner RL, Melzer JM,  Plant Nitrogen Metabolism , Recent Advances in Phytochemistry, Vol 23, Plenum Press, New York, pp. 117-155, (1989).

(8) Higdon, J., An Evidence Based Approach to Vitamins and Minerals (New York, Thieme, 2003).

(9) Chung, Y., Vitamin Nutrition and Gastroesophageal Cancer, J. Nutr., 338S-339S (2000).

Author: James East


 

In March of 2007, CNW Marketing Research, Inc. published its independent research data in a paper titled “Dust to Dust: The Energy Cost of New Vehicles from Concept to Disposal” (8).  In the more than 450-page document, CNW argues that hybrid vehicles are not only no more energy efficient than conventional vehicles, but are actually less energy efficient and more environmentally detrimental than the largest of all commercial vehicles, the General Motors Hummer (8).  Following a frenzy of media attention, CNW’s report was quickly met with opposition from the scientific community for its complete lack of cited references (not a single reference in the entire document), peer review, and transparency of methods (1).  Not only did CNW’s report lack visible scientific evidence to support its claims, it also contradicted all published peer-reviewed data on the efficiency of hybrid vehicles (1).

Ninety percent of the energy required for automobile production and operation is consumed during vehicle operation (1).  Only about 10% of the total energy goes toward manufacturing the vehicle and all of its parts.  Thus, the best way to improve the energy efficiency of vehicles is to reduce the amount of energy consumed during operation, and this is exactly what hybrid vehicles do.  Using hybrid technology, vehicles such as the Toyota Prius employ rechargeable nickel metal hydride (NiMH) batteries to lower gasoline consumption, thereby increasing energy efficiency.  The vast majority of pollution from vehicles occurs during vehicle operation (85%) and not during manufacturing (10%) or disposal (5%).  Hybrid vehicles reduce the main sources of operational environmental pollutants by 1) reducing the amount of gasoline burned, thus reducing exhaust carbon monoxide (CO), nitrogen oxides (NOx), and sulfur dioxide (SO2), and 2) replacing toxic lead-acid batteries with long-lasting, rechargeable, environmentally friendlier NiMH batteries.

mining materials in vehicles

Mining materials used in vehicle production: http://www.mnh.si.edu/earth/text/3_3_2_1.html

Lead is a highly toxic metal that can cause brain and kidney damage, hearing impairment, and learning and behavioral problems in humans (7).  The average 3000 lb car contains 27 lbs of lead, and 95% of this lead is in the battery (4).  Thus, replacing lead-acid batteries with NiMH batteries reduces the amount of toxic lead in a car by 95%.  In 2000, Americans used >2 million tons of lead.  Automobiles were responsible for 50% of those 2 million tons, and more than 90% of automotive lead usage was for batteries.  If the American auto industry no longer used lead-acid batteries, we could reduce the country’s annual use of toxic lead by roughly 900,000 tons, thereby reducing environmental pollution from the mining process and from lead disposal.  Although lead batteries are the most recycled product known, 42,000 tons of lead end up in landfills each year, and thousands more tons fall to the sides of our roadways and into our water.
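
These figures can be cross-checked with simple arithmetic; the short Python sketch below uses only the per-car and national totals quoted above and shows where the roughly 900,000-ton figure comes from.

```python
# Rough check of the lead figures quoted above (all values from the text).
total_us_lead_tons = 2_000_000       # >2 million tons of lead used in the US in 2000
auto_share = 0.50                    # automobiles account for ~50% of that
battery_share_of_auto = 0.90         # >90% of automotive lead goes into batteries

lead_in_auto_batteries = total_us_lead_tons * auto_share * battery_share_of_auto
print(f"Lead used in automotive batteries: ~{lead_in_auto_batteries:,.0f} tons/year")
# -> ~900,000 tons/year, the potential annual reduction if lead-acid batteries
#    were replaced with NiMH batteries

# Per-vehicle figures quoted above: 27 lb of lead in an average 3000 lb car,
# 95% of which sits in the battery.
lead_per_car_lb = 27
print(f"Lead in a typical battery: ~{lead_per_car_lb * 0.95:.1f} lb per car")
```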

car graveyard

Car graveyard: http://www.cleancarcampaign.org/GettingLeadOut.pdf

The pure nickel used in NiMH batteries is not considered toxic (9, 10), and there are recycling protocols in place to ensure all NiMH battery components can be reused (2, 5).  Recycled nickel is mostly used for stainless steel production.  Like all metal mining and refining processes, nickel production is not a totally environmentally friendly process (3, 6, 13).  Flash smelting is the most common form of nickel processing used today, and its negative byproducts include SO2 and toxic nickel carbonyl gas.  However, since the 1970s, smelting plants have used decomposer towers to break down these toxic gases and prevent their release into the environment (13).  Currently, the nickel used to produce vehicle battery packs accounts for only a small fraction of the total amount of nickel produced by nickel plants worldwide.  For example, Toyota purchases the nickel needed for its hybrid battery packs from Canada’s INCO mines located in Greater Sudbury, Ontario.  In 2004, INCO refined 241 million lbs of nickel, and less than 1000 tons of that was purchased by Toyota to produce NiMH batteries.  Although the amount of nickel used by the automobile industry would rise if all vehicles used NiMH batteries, NiMH batteries are designed (and protected under warranty) to last the life of a vehicle (11, 12); because of their longevity, fewer batteries are required per automobile than with traditional toxic lead-acid batteries.
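
To put that purchase in perspective, the sketch below estimates what fraction of INCO's 2004 output went into Toyota's battery packs, using the figures quoted above and assuming "tons" means metric tons.

```python
# Fraction of INCO's 2004 nickel output used for Toyota NiMH battery packs
# (figures from the text; "tons" assumed to be metric tons).
LB_PER_METRIC_TON = 2204.6

inco_output_tons = 241_000_000 / LB_PER_METRIC_TON  # 241 million lb refined in 2004
toyota_purchase_tons = 1000                          # "less than 1000 tons" purchased

fraction = toyota_purchase_tons / inco_output_tons
print(f"INCO output: ~{inco_output_tons:,.0f} metric tons")
print(f"Toyota share: <{fraction:.1%} of total production")  # roughly 1% at most
```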

ni foam

Nickel foam image: Paserin V., Marcuson S., Shu J., and Wilkinson D. CVD Technique for Inco Nickel Foam Production. Advanced Engineering Materials. 6, 454-459 (2004).

Although pure nickel itself is not toxic, nickel complexes formed with other elements can be highly toxic, especially with long-term exposure in humans (14).  Additionally, about 5% of people have a mild allergy to nickel, which results in redness, mild swelling, and irritation upon dermatological exposure, most often from nickel-containing jewelry.

Resources

For more on nickel allergy:

For more information on nickel:

References

(1) Hummer versus Prius: “Dust to Dust” Report Misleads Public with Bad Science

(2) http://www.buchmann.ca/Article16-Page1.asp

(3) http://www.greencarcongress.com/2005/07/new_nickel_foam.html

(4) http://www.mnh.si.edu/earth/text/3_3_2_1.html

(5) http://www.toyota.co.jp/en/environment/recycle/battery/index.html

(6) Acid Mine Drainage at the Nickel Rim Mine Tailings

(7) http://www.cleancarcampaign.org/GettingLeadOut.pdf

(8) CNW’s ‘Dust to Dust’ Automotive Energy Report (See “Dust.PDF” link)

(10) http://data.energizer.com/PDFs/nickelmetalhydride_appman.pdf

(11) http://data.energizer.com/PDFs/NiMH_disp.pdf

(12) http://www.toyota.com/html/hybridsynergyview/2006/fall/battery.html

(13) http://www.hybridexperience.ca/Toyota_Prius.htm

(14) Nickel Smelting and Refining

(15) http://www.inchem.org/documents/ehc/ehc/ehc108.htm

Author: Rebecca Reddaway


 

Cisplatin is a transition metal complex originally reported in the mid-19th century.  Studies 100 years later by Barnett Rosenberg showed that platinum complexes were extremely useful tools, killing cancer cells in human tumor cell lines.  This discovery led to the widespread use of cisplatin as a cancer drug, which continues today.  By impairing a cell’s ability to repair nuclear DNA damage, cisplatin is able to initiate apoptosis and kill tumor cells.  In addition to its efficacy against cancer, it is also an extremely cost-effective strategy for treating tumors.  Cisplatin is a wonder drug that has changed the face of cancer research and treatment for the past 50 years.

cisplatin structure

Discovery

Barnett Rosenberg was the first scientist to discover the utility of cisplatin, also known as cis-diamminedichloroplatinum (1).  His initial research dealt with bacterial growth and its relationship to electric fields (2).  What Rosenberg observed was a 300-fold increase in the size of the bacteria (Figure 1).  He attributed this to platinum species from the conducting plates somehow permitting cell growth while inhibiting cell division.  It was later deduced that the platinum species responsible was cisplatin.  Rosenberg hypothesized that if cisplatin could inhibit bacterial cell division, it could also stop tumor cell growth.  This conjecture has proven correct and has led to the use of cisplatin as a cancer therapy for the past 30 years.

e coli  cisplatin e coli

Figure 1. Phase contrast photomicrograph of normal E. coli (left) and E coli in 8 ppm cisplatin (right).

Mechanism of Action

A cell’s survival is constantly threatened by DNA damage from outside sources and by the incorrect incorporation of bases during replication.  Two mechanisms exist to fix these problems.  Cisplatin takes advantage of these repair mechanisms to effect programmed cell death, namely apoptosis.

Nucleotide Excision Repair
Nucleotide excision repair (NER) is the cell’s response to outside damage caused by ultraviolet rays from the sun and other types of chemical alteration to the DNA (3). Such alterations include thymine dimers and cyclobutyl dimers.  NER enzymes excise a 24-30 base section of the DNA containing the damaged nucleotide, and a polymerase fills in the excised portion.

Mismatch Repair
Mismatch repair is employed when bases are incorporated incorrectly during replication (4).  In a mechanism separate from NER, the mismatch repair enzymes replace the mispaired bases and restore normal DNA composition to the cell.

Cisplatin and its Role in Programmed Cell Death
When cisplatin is introduced into the nucleus it forms adducts with the purine bases adenine and guanine (5) (Figure 2).  This causes torsional strain on the DNA strand and recruits the nucleotide excision repair or mismatch repair enzymes to fix the lesion.  Since the platinum adduct is a non-native structure, the repair mechanisms cannot effectively remove the damage.  The DNA is permanently damaged and the cell could potentially be defective: thus, the cell undergoes apoptosis, sacrificing itself for the greater good of the organism (6).

 

cisplatin and DNA

Figure 2. Crystal structure of platinum coordinated DNA.

Cost of Cisplatin

It has been demonstrated that the average age of the population in the United States (7), and in most of the world, is increasing due to advances in health care and technology.  However, with this increase in longevity come new health problems.  Cancer is said to be a disease of the old: the longer an organism lives, the more genetic mutations and damage it accumulates, which often leads to cancer in the later stages of life.  Thus, there is a clear need for safe, efficacious, and cost-effective therapies to treat cancer in a growing population of older people.  Lowering the cost of cancer therapies allows governments to treat a larger number of people and, in this way, to better fulfill their role as social welfare entities.

Financial analyses of cisplatin vary, and there are different boundary conditions when studying the cost of cisplatin treatment, such as the length of care and how many courses of therapy are needed before the tumor is completely gone.  Initially cisplatin was the sole medicine given during a cancer treatment, but modern protocols call for co-administration with other cancer-fighting drugs.  Typically cisplatin is administered with paclitaxel (Figure 3), which functions as a mitotic inhibitor.  In the UK, the average course of treatment for lung cancer is four weeks and costs around $80,000 US.  Italy (8) and Canada (9,10) typically see costs of approximately $70,000 US.  The US fares the best, with costs of around $50,000 US (11).  These are certainly large amounts of money, but one has to keep a point of reference in mind when considering the cost of cisplatin combination therapies.  Heart transplants cost $210,000 on average in the US (11), while liver and kidney transplants cost up to $400,000 and $150,000, respectively.  In light of this, cisplatin combination treatment is a viable option in comparison with other treatments for terminal illness.

paclitaxel

Figure 3. Paclitaxel.

Conclusion

Cisplatin was a chance discovery that eventually sprinted to the forefront of cancer therapy and research.  By creating platinum adducts on DNA strands, cisplatin is able to derail a cell’s normal DNA repair mechanisms and cause it to undergo apoptosis.  The synthesis of cisplatin is straightforward and cost-effective, which makes for a cheaper cancer therapy.  Because of this, cisplatin can be administered to a wide range of people from varied economic backgrounds.  Even after 50 years, cisplatin is a key player in the fight against cancer.

Resources

Cisplatin in a Clinical Setting
Chemical Information
Side Effects
Cisplatin Nanoparticles
Cisplatin in Animals

References

(1) Rosenberg, B., Van Camp, L., Krigas, T., Inhibition of Cell Division in Escherichia coli by Electrolysis Products from a Platinum Electrode. Nature, 205, 698 (1965).

(2) Rosenberg, B., Van Camp, L., Grimley, E., Thomson, A., The Inhibition of Growth or Cell Division in Escherichia coli by Different Ionic Species of Platinum(IV) Complexes.  J. Biol. Chem., 242, 1347-1352 (1967).

(3) Wood, R., Nucleotide Excision Repair in Mammalian Cells. J. Biol. Chem, 272, 23465-23468 (1997).

(4) Modrich, P., Strand-specific Mismatch Repair in Mammalian Cells.  J. Biol. Chem., 272, 24727 (1997).

(5) Fichtinger-Schepman, A., van der Veer, J., den Hartog, J., Lohman, P., Adducts of the antitumor drug cis-diamminedichloroplatinum(II) with DNA: formation, identification, and quantitation. Biochemistry, 24, 707-712 (1985).

(6) Agarwal, M., Taylor, W., Chernov, M., Chernova, O., Stark, G., The p53 Network.  J. Biol. Chem., 273,1 (1998).

(7) http://www.cdc.gov/nchs/data/hus/hus06.pdf#027

(8) Novello, S., Cost-minimisation analysis comparing gemcitabine/cisplatin, paclitaxel/carboplatin and vinorelbine/cisplatin in the treatment of advanced non-small cell lung cancer in Italy. Lung Cancer, 48, 379-387 (2005).

(9) Earle, C., Evans, W., Cost-effectiveness of paclitaxel plus cisplatin in advanced non-small-cell lung cancer.  Brit. J. Cancer, 80, 815-820 (1999).

(10) Covens, A., Boucher, S., Roche, K., Macdonald, M., Pettitt, D., Jolain, B., Souetre, E., Riviere, M., Is Paclitaxel and Cisplatin a Cost-Effective First-Line Therapy for Advanced Ovarian Carcinoma. Cancer, 77, 2086-2091 (1996).

(11) Kidney transplant costs.


 

silver

Humans have used and valued silver as far back as 3000 B.C., and its prominence as a precious metal continued through Greek, Roman, Indian, and Asian cultures (1). Initially valued for its luster, durability, and malleability, its other properties soon gave rise to many more uses. The most useful of these is its antibiotic activity coupled with its relative inertness toward human beings. One of the earliest recorded medical uses of silver comes from Avicenna (born 980 AD), a Persian physician who used silver filings as a blood purifier, for offensive breath, and for palpitations of the heart (2). Soon silver's efficacy as an antibiotic was recognized, and it was used in all manner of silverware, utensils, and coinage to prevent sickness and disease. Its use as an antibiotic continued with mariners and settlers during the early expansion of the U.S., who would drop silver coins into various liquids to purify them and prolong their shelf life (3). In the modern era, however, its public health uses were temporarily lost with the advent of organic antibiotics. Only in the past couple of decades has silver reemerged as the powerful antibiotic it was long known to be.

Roman Silver Cup; Greek Silver Coins

Silver is encountered in day-to-day practice in its metallic form as well as in powdery white compounds (silver nitrate and silver chloride) or dark-gray to black compounds (silver sulfide and silver oxide) (4). It is common in many ores and is released in the mining of zinc, gold, and lead, contributing to typical silver levels of up to 0.000001 mg of silver per cubic meter of air (mg/m³), 0.2-2.0 ppb in surface waters such as lakes and rivers, and 0.20-0.30 ppm in soils (4).

Silver US Gov

The most common route by which silver now enters the environment, however, is through its use as a reagent in photography. Although digital photography has lessened the amount of silver used, silver halides are likely to retain their pre-eminence in movie and X-ray film because of their high resolution and low cost. Silver is encountered more directly, however, in its metallic form through electronics, jewelry, dental fillings, prostheses, and even as a food additive. Silver use is indeed pervasive and common in our everyday experience.

Silver Hearts; Silver Ring

Silver Tooth

Ionic silver is absorbed into the body most commonly through the gastrointestinal tract, specifically the small intestine; however, it has also been shown to enter through the lungs, nasal mucosa, and skin (5). It is retained in most tissues of the body but, importantly, shows no evidence of crossing the blood-brain barrier, which accounts for the lack of neurotoxicity in humans (6). Only ~10% of ingested silver is absorbed into the body, and it is eventually excreted in the urine and feces. The EPA recommends that the concentration of silver in drinking water not exceed 0.10 milligrams per liter of water (0.10 mg/L) (4).
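
As a rough illustration of how these numbers combine, the sketch below estimates daily silver intake and retention for someone drinking water at the EPA limit; the 2 L/day consumption figure is an assumption for illustration, not a value from the sources cited here.

```python
# Illustrative estimate of silver retention from drinking water.
# The EPA limit and the ~10% absorption figure are from the text;
# the 2 L/day water consumption is an assumed, typical figure.
epa_limit_mg_per_L = 0.10
water_per_day_L = 2.0          # assumption
fraction_absorbed = 0.10       # ~10% of ingested silver is absorbed

ingested_mg_per_day = epa_limit_mg_per_L * water_per_day_L
absorbed_mg_per_day = ingested_mg_per_day * fraction_absorbed
print(f"Ingested: {ingested_mg_per_day:.2f} mg/day, absorbed: {absorbed_mg_per_day:.3f} mg/day")
# -> 0.20 mg/day ingested and ~0.02 mg/day absorbed at the EPA limit
```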

Since silver has no apparent physiological role in any organism, the human body tries to get rid of it as soon as it is encountered. In humans, the transport and sequestering of excess silver is carried out by cysteine-rich proteins, most notably metallothionein. The sulfhydryl groups of cysteine bind to silver, forming a stable silver-sulfur bond (4,6). This mode of action is similar to that of zinc, copper, cadmium, and mercury. Once sequestered, the silver is expelled from the body in the normal fashion and through desquamation, the sloughing off of cells.

CysteineMetallothionein

Despite its relatively inert behavior at normal doses, complications arise at higher doses. A condition called argyria develops with high enough intake and storage of silver; symptoms include a bluish-gray skin discoloration, and severe cases can even affect the eyesight of the affected individual.  At even higher doses more damage is done, leading to coma, pleural edema, hemolysis, and death (2,3,4,5).

Silver Man; Silver Woman

Although silver is known to be an effective antibiotic and antiseptic, its mechanism of action is still debated. The most commonly accepted explanation is the oligodynamic theory, in which silver enters the organism and binds tightly to cysteine-rich proteins, inhibiting their function. Additional silver-protein bonds can form with amino, carboxyl, phosphate, and imidazole groups, leading to precipitation of silver-protein complexes (2). An alternative theory is that silver interferes with DNA replication, triggering programmed cell death (3).

Because silver is relatively harmless to humans while being quite deadly to microbes and other organisms, its medical uses, along with other ingenious applications, have been abundant in recent years. Its antiseptic function makes it ideal for incorporation into bandages and burn creams. Additionally, its electrical properties are claimed to make it a mild analgesic by shorting out the pain receptor response. Because it is an antibiotic, many alternative medicine practitioners recommend ingesting colloidal silver as a cure and preventive measure against many ailments. Silver is even being used as a disinfectant in applications ranging from wood floor (7) and toilet seat coverings to water and air purifiers (8) and washing machines (9).

Silver Tech Wound

Silver TEM Nano

Silver is a versatile and convenient material that presents remarkable potential for innovation and utility, as it has for millennia. Its functions in medical treatment and public health are fascinating, and silver will likely continue to do its job long into the future.

References

(1) The Silver Institute Silver Facts: History of Silver

(2) Wadhira, A. Dermatology Online Journal, 11:1 (2005).

(3) Roy, R. et al., Mat. Research Innovations, 11:1 (2007).

(4) Agency for Toxic Substances and Disease Registry (ATSDR) Public Health Statement for Silver (1990)

(5) WHO-International Program on Chemical Safety. WHO Food Additives No. 12 (1977).

(6) Lansdown, A.B.G. Critical Reviews in Technology, 37:3, 273-250 (2007).

(7) Kim, Sumim. International Biodeterioration and Biodegradation, 57, 156-162 (2006).

(8) Pedhazur, R. et al. Wat. Sci. Tec., 35:11, 87-93 (1997)

(9) Samsung. Samsung Nano Silver (2007).

Image Sources

Argyria photo man
Argyria photo woman
Silver nugget
Silver bandage
Silver candy
Silver ring
Silver tooth
Metallothionein-Me
Cysteine

Author: Noah Manson Prescott


 

Technetium did not exist on earth in any appreciable amount until 1937, when a molybdenum target bombarded with deuterons was analyzed by Carlo Perrier and Emilio Segrè.  This was not for lack of trying: element 43’s existence had been predicted by Mendeleev in the mid-19th century (1). Technetium’s distinction as a man-made element, however, is only one of its curious aspects. As one might suspect, it is also exclusively radioactive, being unstable in every isotopic form.  Still, technetium has managed to be useful in several applications, including chemical synthesis, nanoscale nuclear batteries, and nuclear medicine (1,2,3,4).

Bioavailability and Uptake

Although technetium has isotopes with mass numbers ranging from 85 to 118, only about 10 are regularly seen, and of those only three are relatively abundant: 97Tc, 98Tc, and 99Tc. Technetium-99 is the most abundant of all the isotopes since it is a major fission product in nuclear reactors (5). Moreover, the use of technetium-99m in nuclear medicine is increasing technetium-99 abundance: technetium-99m is a metastable isotope produced by the beta decay of molybdenum-99, another abundant fission product, and it in turn decays to technetium-99.  Indeed, the amount of Tc in the environment is going up in all its forms, but especially Tc-99.  Aside from industry professionals who may handle Tc or encounter it in their work, the most common route of uptake is through water and food, although there is very little data on how much is actually taken up. Since Tc-99m is a nuclear medicine tool, this is how it is most often encountered in humans: it is intravenously injected in a molecular form best suited to a particular diagnostic purpose.  Technetium’s characteristics, however, allow it to pass through the body quickly and with ease (2,4).

technetium metal

Tc-99m/Tc99

Technetium-99m is the most prevalent diagnostic agent in nuclear medicine: it accounts for 85% of all diagnostic scans and is used around 20 million times per year (1). Its unique characteristics as a metastable radioactive isotope give it this dominance in the field. The first thing to note is that it is a powerful tool for looking inside a person because it emits gamma radiation that is easily detected (140.5 keV) and provides high-resolution images.

gamma radiation

More importantly, the amount of radiation it gives off is low enough that it is not detrimental to the patient.  A common injection for a diagnostic test is 250 MBq, which gives a radiation dose of about 0.05 Sv, far below the threshold for radiation poisoning or acute harm.

Tc scan

TC imaging agent

Technetium-99m has a half-life of around 6 hours, which leaves plenty of time for any test, after which the agent rapidly loses most of its activity.  Furthermore, the biological half-life of technetium in general is about 1 day, so there is little time for it to do any damage at all.  An added benefit to the entire process is that Tc-99m is produced from the decay of Mo-99, which has a half-life of 66 hours, giving the agent a precursor form in which it can be shipped and have some shelf life.
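
Because physical decay and biological clearance act at the same time, the effective half-life in the body is even shorter than 6 hours. The sketch below applies the standard relationship 1/T_eff = 1/T_phys + 1/T_bio to the half-lives quoted above.

```python
# Effective half-life of Tc-99m in the body, combining physical decay and
# biological clearance: 1/T_eff = 1/T_phys + 1/T_bio (standard relationship).
t_phys_h = 6.0    # physical half-life of Tc-99m (~6 hours, from the text)
t_bio_h = 24.0    # biological half-life of technetium (~1 day, from the text)

t_eff_h = 1.0 / (1.0 / t_phys_h + 1.0 / t_bio_h)
print(f"Effective half-life: {t_eff_h:.1f} hours")            # ~4.8 hours

# Fraction of the injected activity still in the body after 24 hours:
t = 24.0
remaining = 0.5 ** (t / t_eff_h)
print(f"Activity remaining after {t:.0f} h: {remaining:.1%}")  # only a few percent
```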

tc generator

There has even been some debate as to the positive effects of low-level radiation, like the kind that Tc-99m gives off, because it may activate DNA repair mechanisms in the body that can fix existing mutations (2). A final reason for its selection as the premier diagnostic agent is Tc-99m’s non-specificity in the human body, which gives it the ability to image many different organs depending on the molecular structure attached to the Tc.  In one application, Tc-99m is even being used as a diagnostic for cancer by conjugating the metal with an antibody that is adept at identifying certain carcinomas (3). Some designs look for increased mitochondrial activity, others for certain macrophages or various immunological markers, all depending on the goal of the diagnostic. Ultimately, its capabilities are limited only by our knowledge of human disease.

Harmful Effects, Toxicology, Radiation

After this discussion of Tc-99m, it seems almost too good to be true, so it must have drawbacks somewhere.  Indeed there is one drawback: it still eventually decays to Tc-99, which has a half-life of 210,000 years.

Tc decay

Indeed, we may be getting ahead of ourselves with all of these applications of nuclear physics, in that the problem of what to do with the waste is not yet solved, or even seriously being considered. Radiation from external Tc, however, is not harmful unless one is in close proximity to it or it is internally present. The key issue is that while it may not be a problem now, its accumulation in the environment could have unexpectedly disastrous effects. Internally, Tc-99 is the only real threat; it is easily taken up into plants and animals but does not seem to do any damage, as it is just as easily handled by biological chelating agents. In humans the same is true, where metallothioneins take care of the metal efficiently. According to the EPA, the cancer risk coefficients for Tc-99 and Tc-99m through food and water ingestion are 4.28 E-11 and 1.22 E-12 (Bq-1), respectively. This epidemiological figure “takes into account age and gender dependence of intake, metabolism, dosimetry, radiogenic risk and competing causes of death in estimating the risks to health from internal or external exposure to radionuclides” (7). Unfortunately this is only understandable to those well versed in epidemiology or radiation in general, so for reference: K-40, a radioactive isotope of the potassium that is regularly ingested and present in the body throughout life (the body contains roughly 140 g of potassium, a small fraction of which is K-40), has a cancer risk coefficient of 4.30 E-10, a whole order of magnitude higher, and it is in us for our entire lives (8).  The matter of radiation in general is somewhat perplexing, as it is relatively understudied. Indeed this is the case for technetium, where internal exposure is cautioned against but does not really seem to pose a threat.
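
For a more concrete feel for these coefficients, the sketch below multiplies each one by an arbitrary ingested activity; the 1000 Bq value is purely illustrative and not taken from the references.

```python
# Comparing the ingestion risk coefficients quoted in the text
# (EPA Federal Guidance Report No. 13 style values, risk per Bq ingested).
# The 1000 Bq ingested activity is a purely illustrative assumption.
coefficients_per_Bq = {
    "Tc-99":  4.28e-11,
    "Tc-99m": 1.22e-12,
    "K-40":   4.30e-10,   # naturally present in the body throughout life
}

ingested_Bq = 1000.0  # hypothetical
for nuclide, coeff in coefficients_per_Bq.items():
    risk = coeff * ingested_Bq
    print(f"{nuclide:>6}: lifetime cancer risk ~ {risk:.1e} per {ingested_Bq:.0f} Bq ingested")

ratio = coefficients_per_Bq["K-40"] / coefficients_per_Bq["Tc-99"]
print(f"K-40 coefficient is ~{ratio:.0f}x that of Tc-99")  # roughly an order of magnitude
```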

radioactive warning sign

Conclusion

This investigation has shown that radiation poisoning is relatively understudied and that little is known about what is tolerable and what is detrimental. Currently, however, Tc-99m is an overwhelmingly powerful medicinal tool that seems to help far more people than it hurts. With more research and understanding, its standing as a truly beneficial material can only be solidified.

References

(1) Wikipedia.  Technetium.  http://en.wikipedia.org/wiki/Technetium (2007).

(2) Kemerink, Gerrit.  J. Nuc. Med., 44 947-952 (2003).

(3) Burcheil, Scott. J. Nuc. Med., 30 1351-1357 (1989).

(4) Hoh, Karl. Nuclear Medicine and Biology, 30 457-464 (2003).

(5) Argonne National Laboratory EVS. Human Health Fact Sheet (2005).

(6) Wikipedia.  Sievert.  http://en.wikipedia.org/wiki/Sievert (2007).

(7) EPA. Federal Guidance Report No.13, (1999).

(8) Rowland, R.P. The Radioactivity of the Adult Human Body

Image Sources

Technetium Metal

Scan

Structure TC99m

Tc99m – Generator

Tc99m-Tc99

Tc99 uses

Decay table

Radioactive

Author: Noah Manson Prescott


 

Crookes discovered the metal thallium, atomic number 81, in 1861. (1) The name originated from the Greek word “thallos,” meaning green shoot or twig, because of the green spectral line the element produces. (1)  In the absence of oxygen the metal is a metallic bluish-gray; however, it is strongly oxidized in air and water, causing it to corrode and take on a dull color. (4) The most common oxidation states for thallium are (I) and (III).  Thallium(I) behaves like the group 1 alkali metals, whereas thallium(III) behaves more like aluminum, the lighter member of thallium’s own group. (2) Thallium is tasteless and odorless; therefore its presence in our everyday lives is not easily detected, and it can be very toxic.

Tl minus O2

Figure 1. Thallium in an oxygen deficient environment (11).

Tl in O2

 
Figure 2. Thallium in the presence of oxygen (12).

Uses and Problems

Historically, before its toxicity was known, thallium was used medically to treat multiple infections, including ringworm of the scalp, typhus, tuberculosis, gonorrhea, and malaria. (3) Throughout history, thallium has also been used to kill rats and squirrels. (4) It has been present in low-temperature thermometers, mercury lamps, specialty glasses, imitation jewelry, fireworks, pigments, and alloys with increased corrosion resistance; however, it is no longer used in these products because of its known toxic effects. (2) The major use of thallium has been in electronics, in semiconductors and in crystals for infrared instruments. (4) It took a long time for the use of thallium in most of these products to be discontinued, even though the links between thallium and health problems were known at least as early as 1932 in California, owing to its use as a rodent poison.  (3)
It is important to realize that even though most uses of thallium have been discontinued, it is still very much present in our daily lives.  Approximately 1,000 tons are released into the environment per year, primarily from power, cement, and smelting plants.  Of the 1,000 tons dispersed, 350 tons are in vapor and dust, 500 tons are in fluids and solid waste, and the remaining 150 tons come from a variety of other sources.  Interestingly, some thallium is present in cigarette smoke.  Thallium is introduced into the body by inhalation, skin contact, or ingestion.  The food humans consume daily contains approximately 2 ppb of thallium. (5)
It is also important to note that thallium can be used for illegal purposes because of its tasteless and odorless properties: it has been used to induce abortions and for poisoning. (4) However, controversies about intentional versus unintentional poisoning have arisen because of the low concentration of thallium needed for toxicity.  One example of a poisoning case involves Zhu Ling, a university student in China who began to show symptoms of thallium poisoning in 1994.  She was not officially diagnosed until 1995, however, because exposure to thallium was initially denied.  Sadly, by the time she was treated she had already suffered greatly, including loss of muscular control and neurological damage that limits her ability to speak.  The case remains unsolved as to whether or not she was poisoned by her roommate. (6)

Health Problems

The diagnosis of thallium poisoning can be difficult because detection methods are relatively inefficient at the low amounts present in the body. (7) Generally, thallium poisoning presents itself within one to five days (8) and can become severe within eight to ten days, or sometimes within only a few hours. (7)  The first symptoms of intoxication are gastrointestinal irritation and nerve damage, followed later by hair loss and damage to liver, kidney, intestinal, and testicular tissues.  (9) Problems tend to arise when the concentration of thallium in the blood or urine is greater than 1 mg/L. (9) Thallium is absorbed through the skin and the gastrointestinal and respiratory tracts, after which it is distributed throughout the body to the organs, crosses the placenta, and crosses the blood-brain barrier. Small amounts can be excreted from the body through the hair, saliva, kidneys, skin, sweat, breast milk, and the gastrointestinal tract. (7)  It is important to note that there are no data linking thallium poisoning to reproductive harm, birth defects, or cancer. (7)

Medical Treatment

Several therapies are available if thallium poisoning occurs, yet none is federally approved. (10)  It is important to realize that some of the effects of the poisoning cannot be reversed.  Upon initial exposure, the skin or eyes should be flushed immediately with water.  If a thallium compound is swallowed, vomiting should be induced and the patient should then undergo gastric aspiration and lavage.  An oral treatment that can be used is Prussian blue (ferric hexacyanoferrate(II), Fe4III[FeII(CN)6]3•15H2O). (7,13)

prussian blue
Figure 3. Prussian Blue Molecular Structure (13).

 

Thallium in the body is first present in the intestines; the ions are then taken up into the bile and later released back into the gastrointestinal tract, where they can be absorbed again.  Because Prussian blue has a high affinity for thallium ions, ion exchange can occur in the presence of this agent. (10) Medically, Prussian blue is administered orally and binds the thallium released with the bile, interrupting its reabsorption from the gastrointestinal tract.  The ion exchange causes the thallium ions to be concentrated and excreted, thereby reducing the toxic effects.  Prussian blue is a good therapy because it is not itself absorbed in large quantities from the gastrointestinal tract. (10)

References

(1) “Thallium.” Los Alamos National Laboratory Chemistry Division. 15 Dec. 2003. University of California. 7 Sept. 2008.

(2) Thallium. Chicago: World Health Organization, 1996. 19,20.

(3) Tsai, Y.; Huang, C.; Kuo, H.; Wang, H.; Shen, W.; Shin, T.; Chu, N. Neurology. 2006, (27) 291-295.

(4) Kazantzis, F. Environmental Geochemistry and Health. 2000, (22) 275-280.

(5) Agency for Toxic Substances and Disease Registry (ATSDR). 1992 Toxicological profile for Thallium. Atlanta, GA: U.S. Department of Health and Human Services, Public Health Service.

(6) “Zhu Ling (poisoning victim)” Wikipedia.org. Accessed September 7, 2008.

(7)  Thallium and Thallium Compounds Health and Safety Guide. Number 102. Chicago: World Health Organization, 1996. 11-16.

(8)  Lee, A. G. The Chemistry of Thallium. New York, NY: Elsevier Company, 1971. 1-8.

(9) Selinus, Olle. Essentials of Medical Geology : Impacts of the Natural Environment on Public Health. New York: Academic P, 2005. 197-98.

(10) Federal Register. Vol 68, No. 23/ Tuesday, February 4, 2003. p. 5646.

Image Credits

(11) chemistry.about.com

(12) Wikipedia, Corroded thallium rod.

(13) Robin, M. Electronic Configurations of Prussian Blue. 1(2), 337 (1962).

Author: Evan Joslin.


 

Nuclear power can produce energy without releasing harmful greenhouse gas emissions into the atmosphere.  However, this energy source is also dangerous because it produces radioactive waste that must be sequestered from the environment for roughly 10,000 years so as not to harm public health or the environment.

Introduction

Nuclear power is a relatively new energy source that is in common use around the world and in the US, although only 20% of the electricity generated in the US comes from nuclear power.  Some countries, such as France, generate 77% of their electricity from nuclear power.  The diagram in Figure 1 shows how a nuclear power plant works: electricity is generated by harvesting the heat released when a heavy atom such as uranium is split (1).

Figure 1:  Diagram showing how a nuclear power plant works (1).

Environmental/Public Health Impacts

Although many people assert that nuclear power is an environmentally conscious energy source because no greenhouse gases are emitted, it is by no means a sustainable or renewable energy source.  Uranium mining can have problems similar to those of coal mining, with the added problem that uranium mill tailings, the waste formed by extracting the uranium, are radioactive.  Only 0.1% to 0.2% of uranium ore is uranium, and of that uranium only about 0.7% is the isotope usable as reactor fuel (2).  Because sulfuric acid is used to extract the uranium, mining can contaminate groundwater with radioactive metals and other metals.  In situ leaching, diagrammed in Figure 2, is particularly harmful because the rock is not removed from the ground; instead, sulfuric acid is simply pumped into a deep aquifer (3).


Figure 2: A diagram of how in situ leaching mining works. (3)

A nuclear power plant meltdown such as the one that occurred at Chernobyl caused a significant amount of radioactive material to spread across Ukraine and all of Europe.  This huge radiation exposure occurred largely because there was no containment building.  At Three Mile Island a meltdown also occurred, but most of the radiation was kept inside the containment building.  Very strict regulations have made nuclear power plants safer.  Nuclear power plants emit only 0.009 millirems/year, a negligible amount compared to natural background radiation (4).  Therefore, nuclear power plants themselves are fairly safe.
However, the waste produced at the end of the process has a huge potential to cause public health problems. Two levels of waste are created at nuclear power plants: low-level waste (LLW) and high-level waste (HLW).  LLW consists of cleaning items and other materials that have been exposed to radiation; typically, it is compacted, burned in special facilities, and buried in the ground (2).  HLW is defined as used nuclear reactor fuel.  As shown in Figure 3, the waste takes about 10,000 years before its activity begins to level off.  Developing strategies to keep this waste carefully contained for 10,000 years has been difficult, and debates over a national repository have been ongoing for quite some time (2).

Figure 3:  The radioactivity of various radioactive metals in HLW over time (2).
New Technologies

Research has shown that thorium as a fuel source has distinct advantages over uranium.  Thorium is found at higher concentrations in the earth, and all of the mined thorium can be used in the reactor, so less needs to be mined.  The use of thorium rather than uranium will also decrease both the quantity and the radioactivity of the waste.  In Figure 4, the radioactivity of the waste from a typical U-Pu fuel without reprocessing is shown by the black line; the Th-U mixtures both produce waste substantially lower in radioactivity than this line (5).  When thorium is used, the spent fuel cannot be used in nuclear weapons, so thorium is safer to use (6).  The use of thorium in nuclear energy is still at an early stage, and more research is needed to confirm these benefits.  A large-scale switch from uranium to thorium might be made in the near future, but it cannot be done at present.

 

Figure 4:  The radioactivity of waste over time with the use of different fuels. (5)  

Comparison of Nuclear Power to Other Energy Sources
Coal and Nuclear Power
Coal plants, simply by burning coal, release on average 8 metric tons of carbon dioxide per person living in a developed country per year, as shown in Figure 5 below.  Coal combustion also releases toxic elements such as lead, selenium, uranium, fluorine, and arsenic (7).  Mercury is released from coal plants as well, and mercury accumulation in fish is having a detrimental impact on human health: young children and pregnant mothers are advised not to consume certain types of fish because of their mercury content.  The mercury burden is only increasing and cannot easily be removed from the environment (8).  The heavy metals released from coal will persist in our ecosystem indefinitely and will always be dangerous to human health (8).  Far more coal must be mined than uranium or thorium.  Not only does burning coal produce toxic gases, but around 300 kg of fly ash, which contains uranium, thorium, and other radioactive materials, is produced per person in a developed country per year (4).

Figure 5:  Comparing nuclear and coal power as sources for electricity.  *8,000 kWh average use per person per year for developed country (2)
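
Dividing the two per-person figures in Figure 5 gives a rough emissions intensity for coal-fired electricity; the sketch below uses only the numbers quoted here.

```python
# Rough emissions intensity of coal electricity from the per-person figures above.
co2_per_person_t = 8.0               # ~8 metric tons CO2 per person per year (coal)
electricity_per_person_kWh = 8000.0  # ~8,000 kWh average use per person per year

kg_co2_per_kWh = co2_per_person_t * 1000.0 / electricity_per_person_kWh
print(f"Coal emissions intensity: ~{kg_co2_per_kWh:.1f} kg CO2 per kWh")
# -> ~1 kg CO2/kWh, in line with commonly cited values for coal-fired generation
```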

Nuclear Power and Renewable Energy Sources
Getting energy from renewable sources such as those diagrammed in Figure 6 has recently become attractive because they do not rely on fossil fuels.  The main drawback to renewable energy sources is that they are intermittent, and there is no large-scale means to store electricity for later use (2).  If Americans continue to expect electricity at night or on days without wind, we must continue to rely heavily on base-load power, which comes mainly from either coal or nuclear plants.

Figure 6:  Diagram representing how a diverse number of renewable energy sources can provide all of US electricity by 2050 (9)

Conclusion

Therefore, although nuclear power does pollute the environment and has some negative impacts on human health, it is less hazardous to the environment than the alternative of investing in coal.  If the US invests in reprocessing fuel and in using thorium rather than uranium, nuclear power’s impact on the environment can be further reduced.  Renewable energy sources such as wind and solar power are much cleaner than nuclear power, but because there is no efficient means to store energy, they cannot yet provide all the energy necessary to power America.  Investing in these sources is key to preventing future harm to the environment and preparing the US to switch over to them once the technology to store electricity is developed.  However, because of global warming concerns and the huge negative impacts coal has on the environment and public health, coal should be phased out before nuclear power.  Even increasing nuclear power slightly to reduce coal’s environmental impact would be a reasonable temporary solution.  Primarily, though, the US should invest in more renewable energy sources, particularly in states that have large amounts of wind or are sunny nearly every day of the year.  Hopefully, phasing out coal and replacing it with renewable energy sources and cleaner nuclear power will provide a good stepping stone to a future in which the US runs entirely on renewable energy.
What to Do?

Create a website that would include the following:

  • The plan mentioned above
  • A comparison of all energy sources
  • Ways to conserve energy
  • A blog to express ideas
  • Letters to send to politicians
  • Advertising space for selected companies, to raise money
  • How the money can be spent:
      • Large grants to support research into storing electricity and making nuclear power cleaner
      • Small grants to support community outreach in renewable energies

The website would attempt to attract the everyday person, who does not know very much about energy sources.

Author: Rebecca Schwantes


 

Commercial antifouling paints are used to prevent the buildup of organisms on underwater surfaces.  The accumulation of bio-matter adds weight and hydrodynamic drag to seagoing vessels and causes accelerated corrosion of underwater structural components.  Commercially used antifouling paints contain high levels of tin or copper.  The most effective antifouling agent is tributyltin (TBT), usually applied in the form of TBT oxide or TBT methacrylate.  Regulations passed by the International Maritime Organization in 2001 declared that no TBT paints could be applied after Jan 1, 2003 and that no remaining underwater surfaces could contain any trace of TBT after Jan 1, 2008.(1)  There is now an urgent need for replacement materials capable of equal or better antifouling performance.

TBT-containing paints work through simple diffusion of the organometallic complex out of the paint matrix or through ablation of paint layers.  Once in open water, TBT is converted to a salt form with a chloride, hydroxide, or carbonate counterion.  It has been detected hundreds of miles offshore, though it is found in the highest concentrations around harbors.  Depending on light availability, pH, and oxygen concentration, it can take a year or more to degrade to the less toxic dibutyltin or monobutyltin forms.(2)

salt
Structure of TBT salt

TBT can be fatal to marine organisms even at low concentrations.  The LC50 value of TBT oxide for rainbow trout was found to be only 7.1 μg/L.(3)  The Pacific oyster showed very different responses to TBT depending on its stage of growth, illustrating the long-accepted fact that TBT is more lethal to most species in their developing stages.  The larval stage of this oyster showed an LC50 value of 1.557 μg/L, while the adult showed an LC50 value of 282.2 μg/L.(4)  Copepods and mysid shrimp seem to be the most vulnerable, with LC50 values close to or less than 1 μg/L for many species.(2,6)
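
The oyster data make the stage-dependence concrete; a quick calculation with the LC50 values quoted above gives the ratio between the two life stages.

```python
# Relative sensitivity of Pacific oyster life stages to TBT,
# using the LC50 values quoted in the text (ug/L).
lc50_larval_ug_per_L = 1.557
lc50_adult_ug_per_L = 282.2

sensitivity_ratio = lc50_adult_ug_per_L / lc50_larval_ug_per_L
print(f"Larvae are ~{sensitivity_ratio:.0f}x more sensitive to TBT than adults")
# -> roughly a 180-fold difference between life stages
```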

TBT can produce strange effects, such as widespread sex changes and impotence in gastropods, at concentrations as low as ng/L levels.(5,7)  It has also been shown to seriously damage the immune response of flatfish living on seabeds where high levels of TBT are present.  Finally, TBT is known to accumulate in the food chain, appearing in various species of whales, dolphins, and seals.(1)


Buildup of Barnacles on an Underwater Surface. (10)

Ship
The red stripe is coated with antifouling paint; the rest of the surface was left bare. (11)

It is clear that more environmentally friendly antifouling coatings must be found.  The phasing out of TBT will cause serious problems, such as the need for more frequent recoating with less efficient paints.  An economic burden can also be expected, because slower, fouled ships will burn more fossil fuel for transportation.  Additionally, marine species clinging to ship hulls can be introduced into non-native environments worldwide if proper care is not taken for their removal.

Currently, the most widely used alternatives are coatings containing even higher concentrations of copper(I) and copper(II) oxides than the previous TBT-containing paints.  Although copper oxides seem to be less toxic than TBT, much research is being done into finding totally non-toxic systems to prevent biological growth.  Nonstick coatings such as fluoropolymers are being investigated for their ability to prevent larval organisms from clinging to underwater surfaces, based on the same principle as a Teflon frying pan.(1)  The chemical capsaicin, which is found naturally in red peppers, has been demonstrated to reduce fouling when mixed with non-metal-containing marine paints.(8)  Finally, even more exotic alternatives are being pursued, such as trying to synthetically mimic the surface features of shark skin, which is known to prevent barnacle buildup.(9)

Resources

International Maritime Organization

Antifouling Paints

References

(1) International Maritime Organization Website (http://www.imo.org)

(2) Environmental Protection Agency’s Ambient Aquatic Life Water Quality Criteria for TBT (2003).

(3) Hall, L. W. H. Marine Pollution Bulletin 19, 431-438 (1988).

(4) Martin, R. C., Dixon, R. J., Maguire, R. J., Hodson, P. J., Tkacz, R. J. Aquatic Toxicology 15, 37-52 (1989).

(5) Thain, J. E. Int. Counc. Explor. Sea, Mariculture Committee E:13 (1983).

(6) Sidharthan, M., Young, K. S., Woul, L. H., Soon, P. K., Shin, H. W. Marine Pollution Bulletin 45, 177-180 (2002).

(7) Hall, L. W. H. Marine Pollution Bulletin 19, 431-438 (1988).

(8) Bryan, G. W., Gibbs, P. E., Burt, G. R., Hummerstone, L. G. Journal of Marine Biology 66, 611-640 (1986).

(9) Irrigation Training and Research Center Evaluation of Antifouling Paints (http://www.itrc.org/reports/paints/paints.pdf)

(10) National Geographic News Online (2005) (http://news.nationalgeographic.com /news/2005/07/0722_050722_sharkskin.html).

(11) Image taken from: (www.woodshole.er.usgs.gov).

(12) Image taken from: (http://www.ortepa.org/pages/antifoulants.htm).

Author: Tyler St. Clair


 

Titanium dioxide (TiO2) is the most common inorganic filler in sunscreen formulations. It is also one of the most common components of the Earth’s crust, its two most common crystalline forms being anatase and rutile (1).  It is relatively nontoxic under normal conditions, but exposure to UV light can result in the formation of reactive oxygen species such as superoxide, hydroxyl radicals, and singlet oxygen (2).  Further, TiO2 is known to penetrate living skin cells and thus warrants serious investigation of its phototoxic effects (3) (Figure 1).

 tio2 powder

Figure 1.  TiO2 Powder (4).

Sunscreens use TiO2 particles around 20-50 nm in size to scatter light at wavelengths below 400 nm, in the UV region of the spectrum; the particles do not scatter light at visible wavelengths and so appear clear to the eye.  Additionally, the light that is not reflected from the particles by Rayleigh scattering is efficiently absorbed (3).  TiO2 is a semiconductor with a band gap of only 3.23 and 3.06 eV for its anatase and rutile crystalline forms, respectively, so UV photons readily excite electrons across the gap.  The excited electrons and the holes they leave in the valence band migrate to the surface, where they readily react with any adsorbed species (5).  The electrons react with molecular oxygen to form superoxide anions, while the holes react with hydroxide anions to form hydroxyl radicals.  Both are capable of inducing DNA damage and general oxidative stress (6) (Figure 2).

TiO2 + hv → TiO2 (e- / h+)

e- + O2 → O2-· → HO2·

h+ + OH- → ·OH

DNA + ·OH + O2 → DNA-OH + O2-·

Figure 2. Reactive Species Produced in Aqueous Environments and an Example of Subsequent Reaction with DNA.
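
The band gaps quoted above translate directly into the longest wavelengths that can drive this photochemistry, via λ = hc/E (hc ≈ 1240 eV·nm); the sketch below performs the conversion.

```python
# Converting the TiO2 band gaps quoted above into threshold wavelengths,
# using lambda = h*c/E (h*c ~ 1240 eV*nm).
HC_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm

band_gaps_eV = {"anatase": 3.23, "rutile": 3.06}
for phase, gap in band_gaps_eV.items():
    wavelength_nm = HC_EV_NM / gap
    print(f"{phase}: photons below ~{wavelength_nm:.0f} nm can excite electron-hole pairs")
# -> ~384 nm (anatase) and ~405 nm (rutile), i.e. in or near the UV region,
#    consistent with UV exposure driving the reactive-oxygen chemistry above
```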

The first biological studies examining the UV excitation of TiO2 nanoparticles were done in 1997.  The ability of these particles to produce oxidizers capable of damaging DNA is dramatic, with damage demonstrated both in vitro and in human fibroblast cells.  Only a handful of studies have been done since, and they have focused on confirming the photochemical pathways involved and on modifying TiO2 nanoparticles into more photochemically inert forms (3,5).

Various sunscreen formulations use surface-modified TiO2 particles with coatings such as silicon oxides, silicones, organosilanes, aluminum oxide, and manganese dopants.  All of these formulations except the aluminum oxide-modified surfaces actually seem to show an increase in cell apoptosis when irradiated with UV light compared to plain TiO2 (2).  Thus it is clear that many of the formulations on the market exacerbate an already existing tendency for oxidative damage.  One encouraging study shows successful surface modification of TiO2 to very inert forms using thermally assisted chemical modification.  Figure 3 shows results from this study.  The relative amounts of damaged and undamaged DNA exposed to modified and unmodified particles can be judged from the relative intensities of the gel electrophoresis bands: damaged DNA is represented by the upper band, while intact DNA is the lower band.  Irradiation times were 0, 10, 20, and 30 minutes, respectively, for each lane (7).

gel

Figure 3.  Gel Electrophoresis of DNA after UV Exposure to Unmodified (R8B) and Modified (R8A) TiO2 Particles

It is clear that TiO2, though nontoxic under most conditions, can become extremely reactive under UV excitation in aqueous environments.  This provides a powerful lesson that a material’s application must be considered before judging its safety.  Future studies will hopefully focus on the extent to which TiO2 can pass through the membranes of living skin cells and on finding better surface modifications to reduce its reactivity.

Resources

OSHA on TiO2

General introduction to Sunscreens

Reactive Oxygen Species

References

(1) Ceramics Today article on TiO2.

(2) Rampaul, A., Parkin, I. P., Cramer, L. P. Journal of Photochemistry and Photobiology A: Chemistry 191, 138-148 (2007).

(3) Dunford, R., Salinaro, A., Cai, L., Serpone, N., Horikoshi, S., Hidaka, H., Knowland, J. FEBS Letters 418, 87-90 (1997).

(4) Image taken from: http://www.mariopilato.com/titanium-dioxide.ht

(5) Hidaka, H., Horikoshi, S., Serpone, N., Knowland, J. J. of Photochem. and Photobiol. A: Chemistry 111, 205-213 (1997).

(6) Brezova, V., Gabcova, S., Dvoranova, D., Stasko, A. J. of Photochem. and Photobiol. A: Chemistry 79, 121-134 (2005).

(7) Serpone, N., Salinaro, A., Horikoshi, S., Hidaka, H. J. of Photochem. and Photobiol. A: Chemistry 179, 200-212 (2006).

Author: Tyler St. Clair


 

A Tungsten Carbide and Cobalt Pulmonary Disease

In the early twentieth century, Germany developed a metal alloy that would come to be used in many different industries (1,2). Tungsten carbide (WC) and cobalt (Co) form an alloy hard enough to cut and polish many different metals, hard woods, and diamonds. The alloy is produced when tungsten carbide powder and cobalt powder are heated to approximately 1,500 °C under high pressure (Figure 1). The resulting product is approximately 80% tungsten carbide and 10-20% cobalt and may contain other metals (3). The cobalt binds the tungsten carbide grains together and makes the material very resistant and nearly as hard as diamond (1). Because of this, it is used in the high-speed cutting, drilling, grinding, and polishing of hard materials (4).


Figure 1. The hard-metal production process (2)

Case Studies

Workers who are exposed to the powdered forms of tungsten carbide and cobalt (<10 μm) are more susceptible to the disease because of the dust and aerosol particles in the air (2, 5). Occupational exposure to potentially dangerous substances is often of great concern to government organizations and university-based hospitals. The Centers for Disease Control released an article in 1992 about a 35-year-old industrial plant worker who had been exposed to aerosolized tungsten carbide and cobalt powder. In 1989 he had reported 21 months of shortness of breath, and a chest radiograph showed interstitial abnormalities. An open-lung biopsy showed interstitial fibrosis, many macrophages, and multinucleated giant cells in the alveolar spaces (air sacs in the lungs). Upon testing of the biopsy, no tungsten or cobalt was detected. A year before his biopsy, a supervisor at the same plant, who had worked in the same department, died of acute pulmonary fibrosis and was diagnosed with hard-metal pulmonary disease. A biopsy a few years prior to his death had shown multinucleated cells, interstitial fibrosis, and macrophages consistent with multiple pulmonary diseases. Investigators reexamined his biopsy and detected the presence of tungsten but not cobalt. After these two occurrences, OSHA investigated the airborne cobalt levels in the plant and, in one instance, found levels at 90% of the OSHA permissible exposure limit (PEL). The metal-coating process was adjusted, and radiographs were taken of the 40 metal-coating employees (5).
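
To make a finding like "90% of the OSHA PEL" concrete, the Python sketch below computes an 8-hour time-weighted average (TWA) cobalt concentration from task-based air samples and compares it with an assumed limit of 0.05 mg/m3 (the 50 μg/m3 figure cited under Regulations below). The sample concentrations and durations are hypothetical, not values from the plant survey.

ASSUMED_LIMIT_MG_M3 = 0.05   # assumed airborne cobalt limit; consult current OSHA tables

def eight_hour_twa(samples):
    """samples: list of (concentration in mg/m3, duration in hours); unsampled time counts as zero exposure."""
    exposure = sum(conc * hours for conc, hours in samples)
    return exposure / 8.0   # averaged over the standard 8-hour workday

samples = [(0.07, 3.0), (0.04, 2.5), (0.02, 2.5)]   # hypothetical task-based measurements
twa = eight_hour_twa(samples)
print(f"8-h TWA: {twa:.3f} mg/m3 ({twa / ASSUMED_LIMIT_MG_M3:.0%} of the assumed limit)")
# 8-h TWA: 0.045 mg/m3 (90% of the assumed limit)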

The first reported occurrence of a pulmonary disorder associated with hard-metal production or use was in 1940 in Europe (1, 2, 5). Twenty-seven workers who had been exposed to hard-metal dust were examined after working in a hard-metal factory that had been in operation for two years. Chest radiographs were taken of each worker, and eight of the workers showed reticular shadowing in areas of fine nodulation, which suggested the beginning of pneumoconiosis (inflammation followed by scarring of the lung tissue) (2, 6). In 1951, two men who had worked with the powders for 10 and 30 years, respectively, also had chest radiographs taken. Both had suffered from dyspnea, and the latter died from cardiac failure due to emphysema and chronic bronchitis (2).

lungs

Figure 2. Radiographs of healthy lung (left), 13 year (middle) and 23 year (right) exposure.

Case studies of hard-metal workers were performed from the 1940s to the 1960s, with radiographs taken of each worker. Figure 2 (left) is a radiograph of healthy lungs; the middle and right panels are radiographs from a man who mixed the powders for 13 years and from a man who worked as a sharpener for 23 years, respectively. The patient in the middle panel has heavy hilar shadows (the hilum is the opening by which nerves, ducts, or blood vessels enter or exit an organ) and profuse micro-nodular opacities. Most of his problems occurred in the mid and lower zones of his lungs. The right panel shows that the second patient has an enlarged heart and small nodules. He also had an increase in translucency at the right base with destruction of both costophrenic angles (where the diaphragm meets the ribs). Both patients had an increase in linear markings (2). As these case studies and radiographs show, hard-metal workers who are susceptible to this disease have a poor prognosis.

What are the Symptoms of Hard-metal Disease?

There are many signs and symptoms of hard-metal disease. A patient may have tightening of the chest, cough, finger clubbing, exertional dyspnea (shortness of breath on exertion), fatigue, sputum production, and weight loss. Once the patient experiences some of these symptoms and seeks the advice of a medical professional, a radiograph is usually taken of the lungs. The usual finding is an interstitial pattern suggesting fibrosis (8).

What Signifies Hard-metal Disease?

Once a radiograph shows interstitial fibrosis, a biopsy is taken of the lungs. The presence of tungsten carbide is the first indicator; cobalt is not normally present because of its high biological solubility (5). The biopsy usually shows macrophages and multinucleated giant cells (Figure 3) that have engulfed the surrounding cells (4, 8).


Figure 3. Multinuclear giant cells engulfing the cells nearby (4)

Will All Hard-metal Workers Develop the Disease?

No. There is no correlation between the length of time worked in the industry and the progression of the disease; some workers are simply individually sensitive to the particles (3). Researchers have attempted to propose a causal route or mechanism but have not been entirely successful. Some have suggested an autoimmune process (3). It is believed that cobalt in the presence of tungsten carbide is what exacerbates the disease. Tests have been performed on rats, guinea pigs, and mini-pigs, and tungsten metal or tungsten carbide alone did not reproduce the same results as when cobalt was present. Most of the animals showed multinucleated giant cells and macrophages, but the research was incomplete and did not establish a mechanism by which the disease develops.

A Hypothesis on the Effect of Cobalt in the Lungs

A Fenton-like reaction can occur in which cobalt takes the place of ferrous ions, producing hydroxyl radicals, which are referred to as activated oxygen species (AOS). When cobalt and tungsten carbide particles are in close contact, cobalt donates electrons to the surface of the tungsten carbide particle, and these electrons can in turn reduce oxygen to generate AOS. The oxidized cobalt then goes into solution, which also helps explain why cobalt is usually not found in a biopsy. Lison et al. had not completed the research required to confirm this hypothesis. They also cite this radical formation as a reason only 1-5% of hard-metal workers actually develop the disease: a worker without a strong antioxidant defense may be unable to neutralize the radicals before damage occurs (9).
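
One way to summarize the hypothesized chemistry is as a set of generic reaction steps. The scheme below is a sketch assembled from the description above and from textbook Fenton chemistry, not the exact equations reported by Lison et al.:

\begin{align*}
\mathrm{Co} &\rightarrow \mathrm{Co^{2+}} + 2e^{-} && \text{(cobalt donates electrons at the WC surface)}\\
\mathrm{O_2} + e^{-} &\rightarrow \mathrm{O_2^{\bullet -}} && \text{(oxygen is reduced to superoxide)}\\
\mathrm{2\,O_2^{\bullet -} + 2\,H^{+}} &\rightarrow \mathrm{H_2O_2 + O_2} && \text{(dismutation to hydrogen peroxide)}\\
\mathrm{Co^{2+} + H_2O_2} &\rightarrow \mathrm{Co^{3+} + OH^{-} + {}^{\bullet}OH} && \text{(Fenton-like generation of hydroxyl radical)}
\end{align*}

The dissolved cobalt formed in the first step is consistent with the observation that cobalt, unlike tungsten, is rarely detected in biopsy tissue.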

Treatment

Unfortunately, no cure has been developed. Once the disease is diagnosed, corticosteroids are the usual treatment, but they generally do not reverse its effects. Once contracted, the disease carries a poor prognosis (3, 4).

Regulations

OSHA has set the limit for hard metal in the air at 50 μg/m3. This limit will hopefully prevent workers from becoming sensitized and protect those who have already become hypersensitive to the particles (5).

References

  1. Fischbein, A.; Luo, J. J.; Solomon, S. J.; Horowitz, S.; Hailoo, W.; Miller, A. Clinical findings among hard metal workers. Brit. J. Industr. Med. 1992, 49, 17-24.
  2. Bech, A. O.; Kipling, M. D.; Heather, J. C. Hard Metal Disease. Brit. J. Industr. Med. 1962, 19, 239-252.
  3. Ruediger, H. W. Hard Metal Particles and Lung Disease: Coincidence or Causality? Respiration 2000, 67, 137-138.
  4. Cleveland Clinic. Occupational Lung Diseases. (accessed 22 September 2008).
  5. Centers for Disease Control. Pulmonary Fibrosis Associated with Occupational Exposure to Hard Metal at a Metal-Coating Plant—Connecticut, 1989. (accessed 22 September 2008).
  6. Aetna InteliHealth. Pneumoconiosis. (accessed 30 September 2008).
  7. HubPages. Air Purifiers: What are they? And do you need one? (accessed 29 September 2008).
  8. Haz-Map: Occupational Exposure to Hazardous Agents. Hard Metal Disease. (accessed 22 September 2008).
  9. Lison, D.; Lauwerys, R.; Demedts, M.; Nemery, B. Experimental research into the pathogenesis of cobalt/hard metal lung disease. Eur. Respir. J. 1996, 9, 1024-1028.

Author: Morgan Moyer


 

Vanadium is a trace element believed to have biological significance, though the role it plays in biological systems is still being investigated.  The element was first discovered in 1801 by the mineralogist Del Rio, who was later convinced that it was merely a form of chromium (1).  The element was then rediscovered in 1831 by Sefström.  The name vanadium comes from Vanadis, a name for the Norse goddess of beauty (1, 2).
Vanadium is present in the Earth’s crust at an average concentration of 35 ppm and in seawater at roughly 2 ppb (2).  The element is found naturally in more than 65 minerals and occurs in deposits such as magnetite, sandstone, and phosphate rock.  Vanadium is isolated from these ores as a byproduct or coproduct (Figure 1) (3).

vanadium metal
Figure 1. Vanadium metal.

Vanadium is primarily used as an alloying agent in the steel industry.  Eighty percent of vanadium is used as ferrovanadium (FeV), a strengthening agent.  Ferrovanadium is prepared by reacting crude iron with vanadium pentoxide.  Vanadium pentoxide is also used in the making of ceramics and glass.  A significant chemical application of vanadium is the use of vanadium pentoxide as a catalyst in the production of sulfuric acid (4).

Role in Biology

Very little is known about the biological function of vanadium.  It is most commonly found in the +4 and +5 oxidation states, in the form of vanadyl (VO2+) and vanadate (VO3−), respectively.  Its various oxyanions and cations can act as oxidizing agents (5).
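
As a quick check of those oxidation states, taking each oxygen as −2 and solving for the vanadium oxidation state x:

\begin{align*}
\text{vanadyl, } \mathrm{VO^{2+}}:&\quad x + (-2) = +2 \;\Rightarrow\; x = +4\\
\text{vanadate, } \mathrm{VO_3^{-}}:&\quad x + 3(-2) = -1 \;\Rightarrow\; x = +5
\end{align*}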

Humans usually consume 10-60 μg of vanadium through food daily, and the human body is estimated to contain 50-200 μg of vanadium.  In each organ, vanadium is present at very low concentrations, 0.01-1 μg, and is thought to play a role in a wide variety of physiological processes.  In tissues, approximately 90% of vanadium is bound to proteins and 10% is present in the ionic form.  Vanadium has been shown to be important for the growth of rats and chicks, but an essential role has not been established in humans.  A few living systems are known to contain vanadium: certain species of ascidian, in which a vanadium-binding protein is presumed to exist; Amanita mushrooms, which contain a purple-blue vanadyl complex, amavadin; a few species of brown algae, which contain the metalloenzyme bromoperoxidase in its vanadate form; and some species of polychaete worms (2).

Treatment of Diabetes

Diabetes mellitus (DM) can be classified into two types: type 1 is insulin-dependent and type 2 is non-insulin-dependent.  Type 1 diabetes is caused by the destruction of beta cells, the cells responsible for producing insulin, a hormone that regulates blood glucose levels.  Type 2 diabetes is caused by a variety of factors such as aging and obesity.  As DM develops, many severe secondary complications can occur, including atherosclerosis (a loss of arterial elasticity), microangiopathy (disorders of the blood capillaries), renal dysfunction and failure, cardiac abnormalities, diabetic retinopathy (damage to the retina), and other ocular disorders that often result in blindness.  Type 1 and type 2 DM are treated by injection of insulin and by synthetic therapeutic drugs, respectively.  Unfortunately, these methods of treatment have drawbacks: frequent injections of insulin are painful and increase patient stress, especially in young people, and synthetic therapeutics often have side effects (6, 7).  Figure 2 illustrates the steps, 1-7, that insulin triggers in the cell.  When insulin binds to its receptor, cell signals activate the glucose transporter, causing an influx of glucose, followed by glycogen synthesis, glycolysis, and fatty acid synthesis.

insulin binding
Figure 2. Insulin binding to the receptor causing various responses.

Medicinal inorganic chemistry is a relatively recent field.  With the discovery of the platinum-containing anti-cancer drug cisplatin, as well as the use of gold in the rheumatoid arthritis drug auranofin, other metals and micronutrients are now being investigated.  The first use of vanadium to treat diabetes was in 1899 (6), when Lyonnet et al. tested the ability of sodium vanadate (NaVO3) to lower blood glucose levels in their patients.  The compound was administered to 60 patients, three of whom had diabetes.  Of those three, two showed a slight lowering of blood sugar, and no side effects were noted in any patient (7).

More than 100 years later, vanadium is no closer to becoming an approved treatment for diabetes.  The discovery of insulin in 1922 took the focus off vanadium complexes, as hormone supplementation became the major treatment for the disease (6, 8).  Vanadyl sulfate (VOSO4) soon replaced sodium vanadate in animal testing because vanadyl is less toxic than vanadate (6-10 times less toxic (1)); moreover, much of the administered vanadate is found in the vanadyl form in the body (6).
Metal complexes are now the focus of many studies. Bis(maltolato)oxovanadium (BMOV) (Figure 3) was the first vanadium complex prepared that showed increased effectiveness against diabetes, along with increased uptake and tolerability (8, 9).  There are also differences in the distribution of the ionic versus complexed forms of vanadyl: rats given vanadyl sulfate showed traces of vanadium in the kidney, liver, bone, and pancreas, while those given vanadium complexes had the highest levels in bone, followed by the kidney, spleen, liver, and pancreas.  These differences in distribution may help to explain the differences in toxicity and in long-lasting effects (6).
bmov
Figure 3.  Bis(maltolato)oxovanadium (BMOV) (9)

Many studies have shown that complexes of vanadium lower glucose levels both in vitro and in vivo.  Zhang et al. treated streptozotocin (STZ)-induced diabetic rats with 15 mg/mL sodium vanadate (V) or an herbal decoction of sodium vanadate (HV) with S. Bunge, a plant herb native to China, in their drinking water.  There were also a diabetic control group that received no treatment (D) and a nondiabetic control group (C).  The effects are shown in Figure 4: both vanadium-treated groups showed lower blood glucose levels over the test period.  Twenty-five percent of the V-group rats died due to hypoglycemia or diarrhea, while none of the HV rats died; the authors speculate that the antioxidants in the herb reduced the metal toxicity.  Figure 5 shows the post-study vanadium accumulation in various organs: the V-group rats had high accumulation of vanadium, while the HV group had much lower accumulation, again attributed to the herbal antioxidants (11).  Zhang et al. thus illustrated that it is possible to lower the toxic effects of vanadate with herbal supplements.
blood glucose levels
Figure 4.  Blood glucose levels of rats given a form of vanadium for treatment of diabetes. (11)
v accumulation
Figure 5.  Vanadium accumulation after treatment. (11)
Knowing that vanadium compounds can treat diabetes, researchers then asked whether vanadium complexes could prevent the onset of diabetes.  A summary of this work was written by Sakurai (6).  Rats were given vanadyl sulfate while being injected with STZ to induce diabetes; a second group of rats was given only STZ.  The STZ rats receiving vanadyl sulfate showed a delayed onset of diabetes.  The hypothesized mechanism involves NO production by macrophages (Mø), cells of the immune defense system responsible for phagocytosis, the engulfing of cellular debris and pathogens.  Macrophages produce NO free radicals, made by inducible nitric oxide synthase (iNOS), for use against pathogens.  A high concentration of NO is thought to cause the production of OH free radicals, which damage the beta cells responsible for insulin production.  Vanadyl inhibits the production of NO, thereby delaying the onset of type 1 diabetes (Figure 6) (6).
V macrophage NO

Figure 6.  The effect of vanadyl on macrophage (Mø) NO production in normal and diabetic cells. (6)
The mechanism by which vanadyl lowers blood glucose levels is speculative and unclear.  It is possible that vanadyl acts at as many as three sites, as shown in Figure 7.  Because vanadyl behaves similarly to phosphate, it inhibits protein tyrosine phosphatases (PTPases), the enzymes responsible for dephosphorylating tyrosine residues in proteins (6, 7, 10).  This leaves protein tyrosine kinases active to phosphorylate insulin receptor substrates (IRS) (lower left in Figure 7).  The phosphorylated IRS then attracts various signaling proteins, and signals are sent through the cell initiating two cascades that lead to glucose transport and glycogen synthesis.  A second target of vanadium is PTEN, which also acts as a phosphatase; its inhibition prevents dephosphorylation and allows cell signaling to occur by a similar pathway (7).
v mechanisms in the body

Figure 7.  Proposed vanadium mechanisms in the body. (8)

Toxicity

Vanadium accumulates in bone and kidney tissue; its similarity to phosphate makes it easily stored in bone.  The long-term effects of accumulated vanadium are unknown and under investigation (11).  Gastrointestinal side effects and weight loss occurred in several studies (7, 11).  The carcinogenic potential of vanadium has not been fully investigated, and developmental and reproductive side effects are also known (11).  To move forward and develop a potential pharmaceutical for the treatment of diabetes, more studies must answer the open questions regarding toxicity and the mechanism of action.

Image Sources

Vanadium metal
Insulin mechanism
References

(1) Poucheret, P., Verma, S., Grynpas, M. D., and McNeill, J. H. Vanadium and diabetes. Mol. and Cellular Biochem. 188:73-80 (1998).

(2) Sakurai, H., Fujisawa, Y., Fujimoto, S., et al.  Role of vanadium in treating diabetes.  J. of Trace Elements in Exp. Med. 12:393-401 (1999).

(3) U.S. Geological Survey.  Mineral Commodity Summaries – Vanadium. January 2008. Accessed September 30, 2008.

(4) Mineral Information Institute. Vanadium. Accessed September 30, 2008.

(5) Selinus, O., editor. Essentials of medical geology: Impacts of the natural environment on public health. Elsevier Inc., Burlington, MA. (2005).

(6) Sakurai, H.  A new concept: The use of vanadium complexes in the treatment of diabetes mellitus. The Chemical Record. 2:237-248 (2002).

(7) Srivastava, A. K., Mehdi, M. Z. Insulino-mimetic and anti-diabetic effects of vanadium compounds. Diabetic Medicine. 22:2-13 (2004).

(8) Thompson, K. H., and Orvig, C.  Vanadium in diabetes: 100 years from phase 0 to phase 1. J. of Inorg. Biochem. 100:1925-1935 (2006).

(9) Verma, S., Cam, M. C., McNeill, J. H.  Nutritional factors that can favorably influence the glucose/insulin system: Vanadium. J. of the Amer. College of Nutrition. 17(1):11-18 (1998).

(10) Peters, K. G., Davis, M. G., Howard, B. W., et al.  Mechanism of insulin sensitization by BMOV (bis maltolato oxo vanadium); unliganded vanadium (VO4) as the active component. J. of Inorg. Biochem. 96:321-330 (2003).

(11) Zhang, L., Zhang, Y., Xia, Q., et al. Effective control of blood glucose status and toxicity in streptozotocin-induced diabetic rats by orally administration of vanadate in an herbal decoction. Food and Chemical Toxicology. 46:2996-3002 (2008).

Author: Joseph Houck


 

 
