Carbon Dioxide Emissions

Put the independent variable on the x-axis and the dependent variable on the y-axis.
3. Label each axis with a quantity and a unit.
4. Give the graph a detailed title that includes the independent variable and the dependent variable.
5. Take a screenshot of the graph and paste it here.

Conclusion:
1. Summarize in one sentence whether or not the changes of the two share a pattern.
2. Point out any strange results that may have occurred. Can you explain them?
3. Write a sentence that compares the results to the hypothesis.
4. Explain the conclusion scientifically.
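Conclusion step 1 above, deciding whether the two variables share a pattern, can be sketched numerically with a Pearson correlation coefficient. The data values below are placeholders for illustration only, not real measurements from the lab.

```python
# A minimal sketch of checking whether two variables share a pattern,
# using a Pearson correlation coefficient. Data values are placeholders.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

co2_ppm = [339, 354, 369, 390]        # atmospheric CO2 by decade (ppm), illustrative
ice_extent = [7.8, 6.9, 6.3, 4.9]     # Arctic sea ice extent (million km^2), illustrative

r = pearson(co2_ppm, ice_extent)
print(round(r, 2))  # strongly negative: ice extent falls as CO2 rises
```

A correlation near -1 would support the one-sentence summary that the two quantities change together in opposite directions.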

This means you interpret the data by explaining what the patterns mean. Use scientific language, and be specific. Do research to find explanations. Cite the sources here. [Remember to write the full source at the end in the Works Cited list.]

1. Yes, there is a pattern between these two results. As atmospheric CO2 increases, the extent of Arctic ice decreases. However, some strong winters turn the melted water back into ice. Even so, this is becoming a problem: strong winters have not been occurring as often as they did in the 1980s.
2. I asked Ms. Suzanne why the Arctic ice's extent changes back to ice. She said it is because of the strong winters that occur. Another source I found on the Internet says that strong winters are not as severe as they were from 1900 to 1980.
3. With more greenhouse gas emissions, sea levels will rise.
4. In conclusion, I expected that if all the Arctic ice melted, the world's sea level would rise. However, my hypothesis was wrong: sea levels would not rise if only the ice in the Arctic melted, because it would take the melting of all the world's ice to make sea levels increase.

If carbon dioxide emissions get worse, the sea level could rise by up to about 62 meters. Atmospheric carbon dioxide can cause more consequences than just rising sea levels. It can cause extinction for animals that live in cold climates, and greenhouse gas emissions will also have an impact on public health. Therefore, since we cannot destroy carbon dioxide, we can at least reduce fossil fuel combustion and oil production, for example by producing cars powered by electricity.

1. This is where you list all the sources you cited in your lab report.
2. Make sure this list only has sources you already cited in parentheses.
3. Make sure the first word in parentheses is also the first word in the entry on the works cited list.
4. Indent after the first line of each entry.

"Global Warming." Facts, Causes and Effects of Climate Change. Web. 17 May 2014.
Silverman, Jacob. "Why Is Arctic Ice Melting 50 Years Too Fast?" HowStuffWorks. HowStuffWorks.com, 05 Sept. 2007. Web. 18 May 2014.
"Early Warning Signs of Global Warming: Arctic and Antarctic Warming | UCSUSA." Union of Concerned Scientists. UCS. Web. 20 May 2014.

Teacher Decision / Student Opinion

Level descriptor. The student is able to:

1-2: collect and present data in numerical and/or visual forms; accurately interpret data; state the validity of a hypothesis based on the outcome of a scientific investigation.
3-4: collect and present data in numerical and/or visual forms correctly; accurately interpret data and describe results; outline the validity of a hypothesis based on the outcome of a scientific investigation.
5-6: collect, organize and present data in numerical and/or visual forms correctly; accurately interpret data and describe results using scientific reasoning; describe the validity of a hypothesis based on the outcome of a scientific investigation.
7-8: collect, organize, transform and present data in numerical and/or visual forms correctly; accurately interpret data and describe results using correct scientific reasoning; discuss the validity of a hypothesis based on the outcome of a scientific investigation.


Historical Development in the Field of Toxicology

Table of contents

Historical development in the field of toxicology and mechanisms and factors responsible for the entrance of toxicants in the human body and their harmful effects

Abstract

The purpose of this paper is to give a brief historical overview of the field of Toxicology and of how this area of science has developed from centuries ago to the present day. It is also the intention of this paper to explain how toxicants enter our body, how they are absorbed, and the mechanisms responsible for that.

Introduction

As stated by E. Monosson, some define Toxicology as the study of toxic materials, including the clinical, industrial, economic, and legal problems associated with them. Although toxicology, as a formally recognized scientific discipline, is relatively new (with major developments in the mid-1900s), the science itself is thousands of years old. Consider the potential results of the early trial-and-error experiences of hunter-gatherers, for whom identifying a toxic plant or animal was a life-or-death matter. Some of the most poisonous substances known today are naturally produced chemicals, including ricin from castor beans and tetrodotoxin from the puffer fish.

Early humankind's careful observations of plants and animals with toxic characteristics, such as frogs carrying curare-like poisons, were put to use not only for avoiding toxic substances but for weaponry as well. Many naturally derived poisons were likely used for hunting and as medicines (the Egyptians were aware of such toxic substances as lead, opium, and hemlock as early as 1500 BCE). Their use eventually extended to political poisonings, as practiced, for example, by the early Greeks and Romans. With time, poisons came to be used widely and with great sophistication.

Notable poisoning victims include Socrates, Cleopatra, and Claudius. One of the more interesting stories, drawing on both ancient history and current toxicological research, is that of Mithridates, king of Pontus (120-63 BC), who according to toxicological legend was so afraid of falling victim to political poisoning that he concocted a potion from a great number of herbs for his own consumption. It is believed he understood that by consuming small amounts of potential poisons, he might protect himself from any would-be poisoner.

That is, he believed in the effectiveness of hormesis. Apparently, his plan worked so well that he gained a reputation as one so mighty he could not be killed. Unfortunately, it is said that when he eventually wished to kill himself, he was unable to do so by ingesting poison and had to be run through by a sword instead. Whether or not the story is true, it has led present-day scientists to speculate on the ingredients of his potion. It is believed that some of the herbs he may have used, for example St. John's Wort, could truly have contributed to the detoxification of some other poisons.

Recent studies have demonstrated that St. John's Wort (often used as an herbal remedy) can increase the metabolism, or breakdown, of certain drugs and chemicals. This early story of toxicology illustrates a very important concept: all animals have some intrinsic ability to detoxify a number of naturally occurring toxicants in small doses (so that, in some cases, low doses of chemicals may pass through the body without causing harm; from this we derive the concept of a chemical threshold), and these processes can be altered by exposure to other chemicals.

The question remains as to how adept animals, including humans, are at detoxifying many of the newer industrial chemicals or mixtures of industrial or industrial and natural chemicals. Additionally, it is well known that in some cases, detoxification of chemicals can produce even more toxic compounds.

Pre-Industrial Toxicology

As declared by E. Monosson, as humans sought to better understand natural compounds that were both beneficial and harmful to them, there was very little if any clear understanding of the fundamental chemical nature of substances.

That is, there was no connection between the 'extract' and 'essence' of a poisonous plant or animal and any one particular chemical that might cause toxicity. In fact, chemistry in its modern form did not emerge until around the mid- to late 1600s. Paracelsus, a physician of the sixteenth century and one of the early "Fathers of Toxicology", believed that all matter was composed of three "primary bodies" (sulfur, salt, and mercury).

Yet Paracelsus also coined the now famous maxim of the newly emerging discipline of toxicology: "All substances are poisons; there is none which is not a poison. The right dose differentiates a poison from a remedy" (Paracelsus, 1493-1541). This phrase and Paracelsus' name are committed to memory by hundreds of new toxicology students each year, and the maxim has become the 'motto' of toxicology. Interestingly, if one takes Paracelsus at face value, it appears that in this quote he was referring to substances which served as potential remedies but could be poisonous if taken in high enough concentrations. Most of us are aware that overdosing can turn remedies into poisons, even with such apparently innocuous drugs as aspirin and Tylenol.

Another branch of the toxicology family tree that developed in the sixteenth century, along with the study of drugs and the use of chemicals in hunting and warfare, was occupational toxicology. As humans learned how to extract and exploit such materials as coal, metals, and other minerals, occupational exposures to these chemical substances (and to chemicals produced incidentally) resulted. Scientists eventually recognized the links between illnesses and exposures to these compounds.

Some of the first reports of occupational illness, or diseases caused by activities related to specific occupations, can be found in literature from the mid- to late-1500s. Early occupational observations include the ill effects from lead mining and madness caused by mercury exposure (for example, the saying “mad as a hatter” was attributed to the common use of mercury in the hat felting process). Later, in the 1700s, Bernardino Ramazzini is credited with bringing to light diseases of tradesmen, including silicosis in stone workers and lead poisoning.

In the late 1700s, Sir Percivall Pott made one of the more famous observations in toxicology, linking an occupational exposure (in this case, soot in chimney sweeps) to cancer of the scrotum. The pre-Industrial Revolution developments in toxicology discussed to this point were primarily devoted to the study of such naturally occurring toxicants as the heavy metals and the polyaromatic compounds contained in soot, and such toxins as the botulinum toxin produced by the bacterium Clostridium botulinum.

Toxicology and the Chemical and Industrial Revolution

The chemical/Industrial Revolution of the mid-19th century released many naturally-occurring chemicals into the environment in unprecedented amounts. Also, it produced and released new substances unlike any that had existed in the natural world. With the production and use of these chemicals, and the need to protect humans from the toxic effects of industrial chemicals, toxicology eventually evolved to include its modern day branches: pharmacology, pesticide toxicology, general toxicology, and occupational toxicology.

Towards the mid-late 20th century, environmental toxicology was developed to specifically address the effects on both humans and wildlife of chemicals released into the environment. A notable difference among the branches of toxicology is that pharmacology, pesticides and even occupational toxicology primarily have focused on the effects of relatively high concentrations of single chemicals. This compares to the relatively low concentrations of several different chemicals or chemical mixtures that are relevant to environmental toxicology. The chemicals considered by the earlier branches of toxicology were, and are, a known quantity.

That is, the research was designed to address questions about specific, well-characterized chemicals, exposure conditions, and even concentration ranges rather than complex chemical mixtures. For example, pharmacologists might work with a particular active ingredient (e.g., salicylic acid or aspirin), and be confident about the route of exposure (oral) and the concentration or dose. This is seldom the case in environmental toxicology, and hazardous waste assessment and cleanup in particular, where chemicals often are present in mixtures and routes of exposure may vary (for example, from oral to dermal to inhalation). Significantly, exposure concentrations prove difficult to determine.

Mechanisms and Factors Responsible for the Entrance of Toxicants in the Human body and their Harmful Effects

Absorption of Toxicants

Absorption is the process whereby toxicants gain entrance to the body. Ingested and inhaled materials, nonetheless, are considered outside the body until they cross the cellular barriers of the gastrointestinal tract or the respiratory system. To exert an effect on internal organs a toxicant must be absorbed, although local toxicity, such as irritation, may occur without absorption.

Absorption varies greatly with specific chemicals and with the route of exposure. For skin, oral, or respiratory exposure, the exposure dose (the "outside" dose) is usually only a fraction of the absorbed dose (the internal dose). For substances injected or implanted directly into the body, the exposure dose is the same as the absorbed, or internal, dose. Several factors affect the likelihood that a foreign chemical, or xenobiotic, will be absorbed. According to E. Monosson, the most important are:

  • Route of exposure
  • Concentration of the substance at the site of contact
  • Chemical and physical properties of the substance

The relative roles of concentration and properties of the substance vary with the route of exposure. In some cases, a high percentage of a substance may not be absorbed from one route whereas a low amount may be absorbed via another route. For example, very little DDT powder will penetrate the skin whereas a high percentage will be absorbed when it is swallowed. Due to such route-specific differences in absorption, xenobiotics are often ranked for hazard in accordance with the route of exposure.

A substance may be categorized as relatively non-toxic by one route and highly toxic via another route. The primary routes of exposure by which xenobiotics can gain entry into the body are:

  • Gastrointestinal tract: Key in environmental exposure to food and water contaminants and is the most important route for many pharmaceuticals.
  • Respiratory tract: Key in environmental and occupational exposure to airborne toxicants, and used by some drugs (e.g., inhalers).
  • Skin: Also an environmental and occupational exposure route.

Many medicines are also applied directly to the skin. Other routes of exposure, used primarily for specific medical purposes, are:

  • Injections (IV, subcutaneous, intradermal, intrathecal), used mainly for medications
  • Implants (Hormone patches)
  • Conjunctival instillations (Eye drops)
  • Suppositories

For a toxicant to enter the body (as well as move within, and leave, the body) it must pass across cell membranes. Cell membranes are formidable barriers and major body defenses that prevent foreign invaders or substances from gaining entry into body tissues.

Normally, cells in solid tissues (for example, skin or mucous membranes of the lung or intestine) are so tightly compacted that substances cannot pass between them. Entry, therefore, requires that the xenobiotic have some capability to penetrate cell membranes. Also, the substance must cross several membranes in order to go from one area of the body to another. In essence, for a substance to move through one cell requires that it first move across the cell membrane into the cell, pass across the cell, and then cross the cell membrane again in order to leave the cell.

This is true whether the cells are in the skin, the lining of a blood vessel, or an internal organ (for example, the liver). In many cases, in order for a substance to reach its site of toxic action, it must pass through several membrane barriers. Cell membranes surround all body cells and are basically similar in structure. They consist of two layers of phospholipid molecules arranged like a “sandwich” and also known as “phospholipid bilayer”. Each phospholipid molecule consists of a phosphate head and a lipid tail. The phosphate head is polar so it is hydrophilic (attracted to water).

In contrast, the lipid tail is lipophilic (attracted to lipid-soluble substances). The two phospholipid layers are oriented on opposing sides of the membrane so that they are approximate mirror images of each other. The polar heads face outward and the lipid tails inward. The cell membrane is tightly packed with these phospholipid molecules, interspersed with various proteins and cholesterol molecules. Some proteins span the entire membrane, providing for the formation of aqueous channels or pores. Some toxicants move across a membrane barrier with relative ease while others find it difficult or impossible.

Those that can cross the membrane, do so by one of two general methods: either passive transfer or facilitated transport. Passive transfer consists of simple diffusion (or osmotic filtration) and is “passive” in that there is no requirement for cellular energy or assistance. Some toxicants cannot simply diffuse across the membrane. They require assistance that is facilitated by specialized transport mechanisms. The primary types of specialized transport mechanisms are:

  • Facilitated diffusion
  • Active transport
  • Endocytosis (phagocytosis and pinocytosis)

Passive transfer is the most common way that xenobiotics cross cell membranes.

Two factors determine the rate of passive transfer:

  • Differences in concentrations of the substance on opposite sides of the membrane (substance moves from a region of high concentration to one having a lower concentration. Diffusion will continue until the concentration is equal on both sides of the membrane); and
  • Ability of the substance to move either through the small pores in the membrane or through the lipophilic interior of the membrane.

Properties of the chemical substance that affect its ability for passive transfer are:
  • Lipid solubility
  • Molecular size
  • Degree of ionization (that is, the electrical charge of the molecule)

Substances with high lipid solubility readily diffuse through the phospholipid membrane. Small water-soluble molecules can pass across a membrane through the aqueous pores, along with normal intracellular water flow. Large water-soluble molecules usually cannot make it through the small pores, although some may diffuse through the lipid portion of the membrane, but at a slow rate. In general, highly ionized chemicals have low lipid solubility and pass with difficulty through the lipid membrane.
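The two rate factors above (concentration gradient and membrane permeability) can be sketched with Fick's first law of diffusion. The permeability values below are illustrative placeholders, not measured constants for any real chemical.

```python
# A rough sketch of passive transfer using Fick's first law:
# rate = P * A * (C_out - C_in). Permeability values are illustrative.

def diffusion_rate(permeability_cm_s, area_cm2, c_out, c_in):
    """Net amount crossing the membrane per second (arbitrary units)."""
    return permeability_cm_s * area_cm2 * (c_out - c_in)

# A lipid-soluble, nonionized molecule (high permeability) vs. a highly
# ionized one (low permeability), across the same concentration gradient:
lipophilic = diffusion_rate(1e-3, 1.0, c_out=10.0, c_in=2.0)
ionized = diffusion_rate(1e-6, 1.0, c_out=10.0, c_in=2.0)

print(lipophilic / ionized)  # the lipophilic substance crosses ~1000x faster
```

Note that the rate falls to zero as the two concentrations equalize, matching the statement that diffusion continues only until concentrations are equal on both sides.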

Most aqueous pores are about 4 angstroms (Å) in diameter and allow chemicals of molecular weight 100-200 to pass through. Exceptions are the membranes of capillaries and kidney glomeruli, which have relatively large pores (about 40 Å) that allow molecules up to a molecular weight of about 50,000 (slightly smaller than albumin, which has a molecular weight of about 60,000) to pass through. Facilitated diffusion is similar to simple diffusion in that it does not require energy and follows a concentration gradient. The difference is that it is a carrier-mediated transport mechanism.

The results are similar to passive transport but faster, and the mechanism is capable of moving larger molecules that have difficulty diffusing through the membrane without a carrier. Examples are the transport of sugars and amino acids into red blood cells (RBCs) and into the central nervous system (CNS). Some substances are unable to move by diffusion: they cannot dissolve in the lipid layer and are too large to pass through the aqueous channels. For some of these substances, active transport processes exist in which movement through the membrane may be against the concentration gradient: they move from low to higher concentrations.

Cellular energy from adenosine triphosphate (ATP) is required in order to accomplish this. The transported substance can move from one side of the membrane to the other by this energy-requiring process. Active transport is important in the transport of xenobiotics into the liver, kidney, and central nervous system and for the maintenance of electrolyte and nutrient balance. Many large molecules and particles cannot enter cells via passive or active mechanisms. However, some may enter by a process known as endocytosis, in which the cell surrounds the substance with a section of its cell membrane.

This engulfed substance and section of membrane then separates from the membrane and moves into the interior of the cell. The two main forms of endocytosis are phagocytosis and pinocytosis. In phagocytosis (cell eating), large particles suspended in the extracellular fluid are engulfed and either transported into cells or are destroyed within the cell. This is a very important process for lung phagocytes and certain liver and spleen cells. Pinocytosis (cell drinking) is a similar process but involves the engulfing of liquids or very small particles that are in suspension within the extracellular fluid.

Gastrointestinal Tract

The gastrointestinal tract (GI tract, the major portion of the alimentary canal) can be viewed as a tube going through the body. Its contents are considered exterior to the body until absorbed. The salivary glands, the liver, and the pancreas are considered accessory glands of the GI tract, as they have ducts entering it and secrete enzymes and other substances. For foreign substances to enter the body, they must pass through the gastrointestinal mucosa, crossing several membranes before entering the blood stream.

Substances must be absorbed from the gastrointestinal tract in order to exert a systemic toxic effect, although local gastrointestinal damage may occur. Absorption can occur at any place along the entire gastrointestinal tract. However, the degree of absorption is strongly site dependent. Three main factors affect absorption within the various sites of the gastrointestinal tract:

  • Type of cells at the specific site
  • Period of time that the substance remains at the site
  • pH of the stomach or intestinal contents at the site

Under normal conditions, xenobiotics are poorly absorbed within the mouth and esophagus, due mainly to the very short time that a substance resides within these portions of the gastrointestinal tract. There are some notable exceptions. For example, nicotine readily penetrates the mouth mucosa. Also, nitroglycerin is placed under the tongue (sublingual) for immediate absorption and treatment of heart conditions. The sublingual mucosa under the tongue and in some other areas of the mouth is thin and highly vascularized so that some substances will be rapidly absorbed.

The stomach, having high acidity (pH 1-3), is a significant site for the absorption of weak organic acids, which exist there in a diffusible, nonionized, lipid-soluble form. In contrast, weak bases will be highly ionized in the stomach and are therefore absorbed poorly. The acidic stomach may also chemically break down some substances, which for this reason must be administered in gelatin capsules or coated tablets that can pass through the acidic stomach into the intestine before they dissolve and release their contents. Another determinant of the amount of a substance absorbed in the stomach is the presence of food.
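The stomach/intestine contrast above follows the Henderson-Hasselbalch relation: for a weak acid, the nonionized (absorbable) fraction is 1 / (1 + 10^(pH - pKa)). As an illustration, the sketch below uses aspirin's pKa of about 3.5; the exact percentages are a consequence of that assumed value.

```python
# Nonionized fraction of a weak acid at a given pH, from the
# Henderson-Hasselbalch relation. pKa ~3.5 (aspirin) is an example value.

def nonionized_fraction_weak_acid(ph, pka):
    return 1.0 / (1.0 + 10 ** (ph - pka))

stomach = nonionized_fraction_weak_acid(ph=2.0, pka=3.5)
intestine = nonionized_fraction_weak_acid(ph=6.0, pka=3.5)

print(f"stomach:   {stomach:.0%} nonionized")    # ~97%: readily absorbed
print(f"intestine: {intestine:.2%} nonionized")  # <1%: mostly ionized
```

The same relation, with the sign of the exponent flipped, explains why weak bases are highly ionized (and poorly absorbed) in the acidic stomach.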

Food ingested at the same time as the xenobiotic may result in a considerable difference in absorption of the xenobiotic. For example, the LD50 for Dimethline (a respiratory stimulant) in rats is 30 mg/kg (or 30 parts per million) when ingested along with food, but only 12 mg/kg when it is administered to fasting rats. The greatest absorption of chemicals, as with nutrients, takes place in the intestine, particularly in the small intestine (see Figure 9). The intestine has a large surface area consisting of outward projections of the thin (one-cell thick) mucosa into the lumen of the intestine (the villi).

This large surface area facilitates diffusion of substances across the cell membranes of the intestinal mucosa. Since the intestinal pH is near neutral (pH 5-8), both weak bases and weak acids are nonionized and are usually readily absorbed by passive diffusion. Lipid soluble, small molecules effectively enter the body from the intestine by passive diffusion. In addition to passive diffusion, facilitated and active transport mechanisms exist to move certain substances across the intestinal cells into the body, including such essential nutrients as glucose, amino acids and calcium.

Also, strong acids, strong bases, large molecules, and metals (and some important toxins) are transported by these mechanisms. For example, lead, thallium, and paraquat (herbicide) are toxicants that are transported across the intestinal wall by active transport systems. The high degree of absorption of ingested xenobiotics is also due to the slow movement of substances through the intestinal tract. This slow passage increases the length of time that a compound is available for absorption at the intestinal membrane barrier. Intestinal microflora and gastrointestinal enzymes can affect the toxicity of ingested substances.

Some ingested substances may be only poorly absorbed but they may be biotransformed within the gastrointestinal tract. In some cases, their biotransformed products may be absorbed and be more toxic than the ingested substance. An important example is the formation of carcinogenic nitrosamines from non-carcinogenic amines by intestinal flora. Very little absorption takes place in the colon and rectum. As a general rule, if a xenobiotic has not been absorbed after passing through the stomach or small intestine, very little further absorption will occur.

However, there are some exceptions, as some medicines may be administered as rectal suppositories with significant absorption. An example is Anusol (a hydrocortisone preparation) used for treatment of local inflammation, which is partially absorbed (about 25%).

Respiratory Tract

Many environmental and occupational agents, as well as some pharmaceuticals, are inhaled and enter the respiratory tract. Absorption can occur at any place within the upper respiratory tract. However, the amount of a particular xenobiotic that can be absorbed at a specific location is highly dependent upon its physical form and solubility.

There are three basic regions to the respiratory tract:

  • Nasopharyngeal region
  • Tracheobronchial region
  • Pulmonary region

By far the most important site for absorption is the pulmonary region consisting of the very small airways (bronchioles) and the alveolar sacs of the lung. The alveolar region has a very large surface area (about 50 times that of the skin). In addition, the alveoli consist of only a single layer of cells with very thin membranes that separate the inhaled air from the blood stream. Oxygen, carbon dioxide and other gases pass readily through this membrane.

In contrast to absorption via the gastrointestinal tract or through the skin, gases and particles, which are water-soluble (and thus blood soluble), will be absorbed more efficiently from the lung alveoli. Water-soluble gases and liquid aerosols can pass through the alveolar cell membrane by simple passive diffusion. In addition to solubility, the ability to be absorbed is highly dependent on the physical form of the agent (that is, whether the agent is a gas/vapor or a particle). The physical form determines penetration into the deep lung.

A gas or vapor can be inhaled deep into the lung and if it has high solubility in the blood, it is almost completely absorbed in one respiration. Absorption through the alveolar membrane is by passive diffusion, following the concentration gradient. As the agent dissolves in the circulating blood, it is taken away so that the amount that is absorbed and enters the body may be quite large. The only way to increase the amount absorbed is to increase the rate and depth of breathing. This is known as ventilation-limitation.

For blood-soluble gases, equilibrium between the concentration of the agent in the inhaled air and that in the blood is difficult to achieve. Inhaled gases or vapors, which have poor solubility in the blood, have quite limited capacity for absorption. The reason for this is that the blood can become quickly saturated. Once saturated, blood will not be able to accept the gas and it will remain in the inhaled air and then exhaled. The only way to increase absorption would be to increase the rate of blood supply to the lung.

This is known as flow limitation. Equilibrium between blood and air is reached more quickly for relatively insoluble gases than for soluble gases. The absorption of airborne particles is usually quite different from that of gases or vapors. The absorption of solid particles, regardless of solubility, is dependent upon particle size. Large particles (>5 µm) are generally deposited in the nasopharyngeal region (the head airways region) with little absorption. Particles of 2-5 µm can penetrate into the tracheobronchial region.
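The particle-size rule of thumb above can be sketched as a simple classifier. The >5 µm and 2-5 µm cutoffs follow the text; the assumption that smaller particles reach the pulmonary (deep lung) region is a standard extension of this rule, and real deposition also depends on breathing pattern and particle density.

```python
# A sketch of the particle-size deposition rule of thumb. Cutoffs follow
# the text (>5 um nasopharyngeal, 2-5 um tracheobronchial); the <2 um
# pulmonary case is an assumed extension of the same rule.

def deposition_region(diameter_um):
    if diameter_um > 5:
        return "nasopharyngeal"   # head airways region; little absorption
    if diameter_um >= 2:
        return "tracheobronchial"
    return "pulmonary"            # deep lung; greatest absorption

for d in (10, 3, 0.5):
    print(d, deposition_region(d))
```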

Read more

Air Pollution Essay 24

Air pollution is the introduction into the atmosphere of chemicals, particulate matter, or biological materials that cause harm or discomfort to humans or other living organisms, or that damage the natural environment. The atmosphere is a complex, dynamic natural gaseous system that is essential to support life on planet Earth. Stratospheric ozone depletion due to air pollution has long been recognized as a threat to human health as well as to the Earth’s ecosystems.

History

Humans probably first experienced harm from air pollution when they built fires in poorly ventilated caves. Since then we have gone on to pollute more of the earth’s surface. Until recently, environmental pollution problems were local and minor because of the Earth’s own ability to absorb and purify small quantities of pollutants. The industrialization of society, the introduction of motorized vehicles, and the explosion of the population are factors contributing to the growing air pollution problem. It is now urgent that we find methods to clean up the air.

The primary air pollutants found in most urban areas are carbon monoxide, nitrogen oxides, sulfur oxides, hydrocarbons, and particulate matter (both solid and liquid). These pollutants are dispersed throughout the world’s atmosphere in concentrations high enough to gradually cause serious health problems. Serious health problems can occur quickly when air pollutants are concentrated, such as when massive injections of sulfur dioxide and suspended particulate matter are emitted by a large volcanic eruption.

Air Pollution in the Home

You cannot escape air pollution, not even in your own home. “In 1985 the Environmental Protection Agency (EPA) reported that toxic chemicals found in the air of almost every American home are three times more likely to cause some type of cancer than outdoor air pollutants” (Miller 488). The health problems in these buildings are called “sick building syndrome”. “An estimated one-fifth to one-third of all U.S. buildings are now considered ‘sick’” (Miller 489). The EPA has found that the air in some office buildings is 100 times more polluted than the air outside.

Poor ventilation causes about half of the indoor air pollution problems. The rest come from specific sources such as copying machines, electrical and telephone cables, mold and microbe-harboring air conditioning systems and ducts, cleaning fluids, cigarette smoke, carpet, latex caulk and paint, vinyl molding, linoleum tile, and building materials and furniture that emit air pollutants such as formaldehyde. A major indoor air pollutant is radon-222, a colorless, odorless, tasteless, naturally occurring radioactive gas produced by the radioactive decay of uranium-238. “According to studies by the EPA and the National Research Council, exposure to radon is second only to smoking as a cause of lung cancer” (Miller 489). Radon enters through pores and cracks in concrete when indoor air pressure is less than the pressure of gases in the soil. Indoor air will be healthier than outdoor air if you use an energy recovery ventilator to provide a consistent supply of fresh filtered air and then seal air leaks in the shell of your home.

Air pollution has unhealthy effects on people, animals and plant-life across the globe. Every time we inhale, we carry dangerous air pollutants into our bodies. These pollutants can cause short-term effects such as eye and throat irritation. More alarming, however, are the long-term effects such as cancer and damage to the body’s immune, neurological, reproductive and respiratory systems. Acid Rain is a significant air pollution problem that affects rural, suburban and urban areas that are down-wind of major industrial areas.

Acid rain is caused when sulfur and nitrogen pollution from industrial smokestacks combines with moisture in the atmosphere. The resulting rain is acidic, which destroys natural ecosystems and buildings. Global warming occurs as pollution gathers in the Earth’s atmosphere, trapping heat and causing average temperatures to rise. It is hard to predict exactly how climate change will affect a particular area. Here are a few likely results:

  • A rise in sea level between 3.5 and 34.6 in. (9–88 cm), leading to more coastal erosion, flooding during storms and permanent inundation
  • Severe stress on many forests, wetlands, alpine regions, and other natural ecosystems
  • Greater threats to human health as mosquitoes and other disease-carrying insects and rodents spread diseases over larger geographical regions
  • Disruption of agriculture in some parts of the world due to increased temperature, water stress and sea-level rise in low-lying areas such as Bangladesh or the Mississippi River delta.
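The sea-level figure above quotes the same range in two unit systems; a quick conversion (using the exact definition of 2.54 cm per inch) confirms that 9–88 cm and 3.5–34.6 in. describe the same interval:

```python
# Verify that 9-88 cm corresponds to 3.5-34.6 inches.
CM_PER_INCH = 2.54  # exact by definition

def cm_to_inches(cm):
    return cm / CM_PER_INCH

print(round(cm_to_inches(9), 1))    # low end of the range, in inches
print(round(cm_to_inches(88), 1))   # high end of the range, in inches
```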


Proteomics

Proteomics is the study of the proteome of an organism. The last few decades have seen rapid progress in the development of this field. This paper attempts to compare and contrast the way in which proteomics studies are performed today as opposed to those performed ten years ago, and to analyse the future implications. The thrust of research in molecular biology was initially focused on the genomes of various organisms.

As scientists discovered the intricacies of genes and their functions, attention was soon drawn to the end result of the central dogma of molecular biology: the proteins produced through translation of RNAs. The field of proteomics therefore emerged to study the proteins produced in an organism, referred to as the proteome, not just as products of a genome but, more importantly, in terms of how they interact and bring about changes at the macro level.

Proteins play a pivotal role in carrying out various functions in the body at both the structural and dynamic levels. As enzymes and hormones, proteins regulate vital metabolic processes; as structural components, they provide stability to cellular structures. The knowledge obtained through the study of these systems gives an insight into the overall functioning of living organisms. In spite of having similar genetic blueprints, protein expression in various organisms is regulated differently through diverse networks of protein-protein interactions.

Hence, proteomics provides an understanding of these regulatory processes and establishes the differences and similarities between the evolutionary pathways of organisms by grouping them into phylogenetic trees. Further, drugs can be developed for specific diseases by elucidating the structures of proteins responsible for diseased conditions and then designing structural analogues, which can up- or down-regulate metabolic processes.

Thus, the study of proteins plays an essential part in research carried out in related fields such as developmental and evolutionary biology and drug design. Since the invention of 2-dimensional gel electrophoresis in the 1970s, which is considered the stepping stone of modern protein studies, scientists have been constantly striving to develop new and potent methods to study the proteome.

Thus, this paper is an attempt to identify and compare the techniques that have been used and improved over the last decade. The popular and preferred procedure to study the proteome of an organism comprises three major steps: isolation, separation on a 2-D gel, and analysis through a mass spectrometer. Most of the improvements revolve around this basic protocol. 2-D gel electrophoresis was one of the first methods used to analyse the proteome of an organism. In this technique, a protein is separated on the basis of its charge and size.

The proteins are first separated on the basis of their different charges in the 1st dimension, after which they are separated in the 2nd dimension on the basis of their molecular weight. The gel, or map, provides a graphical representation of each protein after separation, so the proteins can be distinguished individually. However, the reproducibility of results obtained through such an analysis has not been satisfactory. To date, constant efforts are being made to improve the efficacy of this technique so that a large number of proteins can be separated at the same time.
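The value of separating in two orthogonal dimensions can be illustrated with a small sketch; the protein spots and their pI and molecular-weight values below are invented purely for illustration:

```python
# Toy illustration of why two orthogonal properties resolve proteins
# that a single property cannot. All names and values are made up.
proteins = {
    "spot_1": {"pI": 5.2, "mw_kda": 30.0},
    "spot_2": {"pI": 5.2, "mw_kda": 55.0},  # same charge as spot_1, different size
    "spot_3": {"pI": 8.1, "mw_kda": 30.0},  # same size as spot_1, different charge
}

# 1st dimension alone: isoelectric focusing separates by pI only
by_pi = {p["pI"] for p in proteins.values()}
# 2nd dimension alone: SDS-PAGE separates by molecular weight only
by_mw = {p["mw_kda"] for p in proteins.values()}
# Combined 2-D map: each protein occupies a unique (pI, MW) coordinate
by_both = {(p["pI"], p["mw_kda"]) for p in proteins.values()}

print(len(by_pi), len(by_mw), len(by_both))  # 2 2 3
```

Spots 1 and 2 co-migrate under IEF alone, and spots 1 and 3 co-migrate under SDS-PAGE alone; only the combined (pI, MW) map resolves all three.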

The first 2-D separation was carried out using an electrophoresis buffer and a starch gel. The improvements that followed laid the foundation of modern 2-D separation, which combines two 1-D techniques: separation on the basis of pH using isoelectric focusing (IEF), followed by SDS-PAGE for separation on the basis of molecular weight. Before separation, the samples are prepared in a suitable buffer using various reagents such as urea (a chaotrope to solubilise the proteins) and DTT (to break disulphide linkages without fragmentation into peptides).

Further, for certain protein segments that are hydrophobic in nature, like those found in the cell membrane, it was discovered that special reagents such as thiourea, sulfobetaine and tributyl phosphine (classified as chaotropes, surfactants and reducing agents, respectively) assisted their solubilisation during sample preparation before running them on the gel. Another notable extension of 2-D separation was the use of IPG strips with different pH gradients. These strips became available commercially and greatly improved the convenience of the technique.

Also, experiments were carried out using a number of such strips to increase the pH range, successfully accommodating a large number of proteins. Nevertheless, this method, although successful, was prone to human error, and the results varied from gel to gel in the majority of cases. To compensate, a number of replicate gels had to be prepared, which demanded a great deal of labour. To overcome this barrier, the difference gel electrophoresis (DIGE) technique was developed. In this method, the proteins are labelled with fluorescent dyes prior to electrophoresis.

The fluorophores are joined via an amide linkage to the amino acid lysine, so the proteins can be resolved together on the same gel through distinct patterns of fluorescent emission. A further advancement of standard 2-D gel analysis was to incorporate automation into the technology; however, the room for automated analysis of the results was limited by the inability of a computer to distinguish between the different patterns. Detecting a protein spot on a gel, measuring its intensity and separating it from the background still remain overwhelming tasks for a computer.

The next step in proteome analysis is protein identification using mass spectrometry (MS). One of the most compelling problems of using MS to study biomolecules such as proteins was the inability to obtain ions of sufficiently large size, which would effectively lead to their identification. Since the development of electrospray ionization (ESI) and MALDI (matrix-assisted laser desorption ionization), this drawback of MS has been overcome, and today the combination of these ion sources with different mass analysers, e.g. MALDI-TOF/TOF, ESI Q-TOF and ESI triple quadrupoles, is widely used in proteomics. Identification of a protein is carried out through a process referred to as peptide mass fingerprinting (PMF). In this technique, proteins that have been separated on a 2-D gel are excised and digested into peptides using proteases such as trypsin. The digested peptides, when analysed in a mass spectrometer, give a characteristic m/z spectrum. The protein can be identified when this data correlates with the data in protein databases, compared using software based on specific algorithms.
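As a rough illustration of the PMF matching idea (not the algorithm of any specific search engine), the sketch below compares observed m/z peaks against theoretical tryptic peptide masses in a toy database; all masses and protein names here are invented:

```python
# Toy sketch of peptide mass fingerprinting (PMF) scoring.
TOLERANCE = 0.5  # Da; allowed error when matching a peak to a peptide mass

# Hypothetical theoretical tryptic peptide masses (Da) for two made-up proteins
database = {
    "protein_A": [501.3, 742.4, 1103.6, 1580.8],
    "protein_B": [433.2, 742.4, 998.5, 1624.9],
}

def pmf_score(observed_peaks, peptide_masses, tol=TOLERANCE):
    """Count observed peaks that match a theoretical peptide mass within tol."""
    return sum(
        any(abs(peak - mass) <= tol for mass in peptide_masses)
        for peak in observed_peaks
    )

def identify(observed_peaks, db):
    """Return the database protein with the most matching peaks."""
    return max(db, key=lambda name: pmf_score(observed_peaks, db[name]))

spectrum = [501.4, 1103.5, 1580.9]   # observed m/z peaks
print(identify(spectrum, database))  # all three peaks match protein_A
```

Real search tools score matches far more carefully (accounting for missed cleavages, modifications and statistical significance), but the core idea of counting mass matches within a tolerance is the same.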

However, to extrapolate a protein’s role in metabolism, it is also necessary to identify how the protein is modified after translation. Post-translational modification acts like a regulatory switch; modifications such as phosphorylation play an important role in processes such as cell signalling. The main drawback in analysing a phosphorylated protein through MS was signal suppression. To rectify this issue, high-performance separation techniques such as HPLC were coupled to the MS; LC-MALDI-MS is an example of such a combination.

A further extension of peptide mass fingerprinting was the development of shotgun proteomics, designed specifically to do away with the disadvantages of a standard 2-D gel analysis. This technique is based on separation of the peptides obtained after protease digestion, using multidimensional chromatography. It is necessary that the two dimensions of this multidimensional chromatographic separation are orthogonal in nature, i.e. that they use two different properties of a protein, similar to a 2-D gel separation, which uses pI and mass.

Separating peptides by reversed phase, based on hydrophobicity, and by strong cation exchange, based on the charge state of the peptides, is an example of separation in two dimensions. Although the PMF approach provided a successful identification process for recognising the proteins present in a proteome, it was also necessary to study the changes in protein expression in response to a stimulus. To achieve this, a technique called ICAT was developed, in which protein mixtures are modified after isolation such that they can be differentiated on the basis of mass from one cellular state to another.

In ICAT, this modification is done by labelling cysteine residues with an isotope-labelled biotin tag. Today, efforts to develop new technologies are directed towards automation of sample preparation and effective interfacing with other techniques. Interfacing has been achieved more successfully with ESI than with MALDI, owing to ESI’s ability to operate with a continuous flow of liquid. Samples from organisms contain thousands of proteins, and effectively separating certain important proteins, such as disease biomarkers, from this mixture is a highly demanding task.

Further, effective proteolytic digestion can be challenging when the proteins of interest are present in low quantities. Therefore, before a sample of protein can be effectively analysed, a number of steps must be performed that are prone to human error and are laborious. The development of microfluidic systems as an interface with mass spectrometers such as ESI provides the option of automating this process, making proteome analysis more effective and less time-consuming.

Therefore, such a chip-based technology has a clear advantage over the traditionally used methods due to its improved probability of obtaining the protein of interest, reduced consumption of reagents and accelerated reaction time. The microfluidic chips can be directly coupled to an ESI-MS using a pressure-driven or electro-osmotic flow; such a system, with a direct interface, is called an on-line setup. Such a setup cannot be achieved with MALDI, where a mechanical bridge must be created between the microfluidic chip and the mass spectrometer.

The first step of a proteome analysis, i.e. sample purification, is carried out using a hydrophobic membrane integrated into an inlet channel of a polyimide chip. Separation of proteins from the sample can be achieved using either a capillary electrophoresis (CE) or a liquid chromatography (LC) method. CE is usually preferred over LC as it provides faster separation and can be coupled to an electric pump. Proteolytic digestion is carried out on the solid surface of the chips, where the enzymes are immobilized.

Thus, such a chip provides a platform for automating the initial steps of a proteomic study, and studies are still being performed to increase the efficacy of this approach. To conclude, over the last decade there has been rapid progress in the techniques used to study proteomics. The direction of progress has also shed light on the importance of proteomics and the implications it will have in the coming years. Studies of evolution have benefitted a great deal from the development of techniques like ICAT, which enhances quantitative and comparative studies of different proteomes.

In the field of medicine and drug discovery, the application of these techniques paves the road for the discovery of novel biomarkers for specific diseases in a quicker and less complicated manner. Further, it would also assist vaccine development by identifying specific antigens for a disease. The development of microfluidic chips has opened the door to new diagnostic techniques by effectively characterizing the proteins responsible for a diseased state. Such an approach has already been employed to study the proteins produced in the body in a cancerous state. Therefore, as more researchers and academics adopt these applications, many more improvements will soon evolve.

References

  1. Anderson, L., Matheson, A. and Steiner, S. (2000). “Proteomics: applications in basic and applied biology.” Current Opinion in Biotechnology 11: 408–412.
  2. Pazos, F. and Valencia, A. (2001). “Similarity of phylogenetic trees as indicator of protein-protein interaction.” Protein Engineering 14(9): 609–614.
  3. Klose, J. (2009). “From 2-D electrophoresis to proteomics.” Electrophoresis 30: 142–149.
  4. Herbert, B. (1999). “Advances in protein solubilisation for two-dimensional electrophoresis.” Electrophoresis 20: 660–663.
  5. Alban, A., David, S., Bjorkesten, L., Andersson, C., Sloge, E., Lewis, S. and Currie, I. (2003). “A novel experimental design for comparative two-dimensional gel analysis: Two-dimensional difference gel electrophoresis incorporating a pooled internal standard.” Proteomics 3: 36–44.
  6. Reinders, J., Lewandrowski, U., Moebius, J., Wagner, Y. and Sickmann, A. (2004). “Challenges in mass spectrometry based proteomics.” Proteomics 4: 3686–3703.
  7. Swanson, S. and Washburn, M. (2005). “The continuing evolution of shotgun proteomics.” Drug Discovery Today 10.
  8. Lee, J., Soper, S. and Murray, K. (2009). “Microfluidic chips for mass spectrometry-based proteomics.” Journal of Mass Spectrometry 44: 579–593.


Enzymes Laboratory Report

ENZYMES LABORATORY REPORT

Introduction

The utilization of any complex molecule for energy by an organism is dependent on a process called hydrolysis. Hydrolysis breaks complex molecules into simpler molecules using water. The reverse process is called dehydration synthesis, which removes water to join simpler molecules into complex ones. However, because hydrolysis occurs very slowly, living organisms use biochemicals called enzymes to speed up the reaction.

In this lab exercise, we studied the nature of enzyme action using live yeast cells as our source of sucrase. The enzyme breaks sucrose into one molecule of glucose and one of fructose. Because sucrose is a large molecule that cannot enter most cells, yeast produce sucrase and secrete it through their cell membranes. The sucrose is hydrolyzed into small six-carbon monosaccharides, which can pass through the cell membranes. The sucrase can be obtained from a 0.5 percent solution of dry baker’s yeast in water.

In parts A and B, the experiment studied the conditions under which yeast cells degrade sucrose by varying the pH and temperature of the environment surrounding the yeast cells. Part C studied the effects of extreme heat on enzyme activity, and part D focused on the saturation point for enzymes using varying substrate concentrations.

Materials and Procedure

See pg 79-82, section: Enzymes, “Experiments in Biology from Chemistry to Sex,” Fifth Edition, by Linda R. Van Thiel.

Results

In test A, effect of pH, the results we obtained for tube 1 were a solution color of orange and a color activity of 3. Tube 2 was also orange with a color activity of 3. Tube 3 was orange with a color activity of 3, tube 4 was green with a color activity of 1, and tube 5 was blue with a color activity of 0. From our results, the optimum pH fell in tubes 1-3. The control in this experiment was test tube 3A, with a pH of 7, as this pH was neutral. In test B, effects of temperature, the optimum temperature is shown on our graph at two different points (either 24 or 60 degrees).

For our results, we received a solution color of blue for tube 1 and a color activity of 0. For tube 2, we received a solution color of orange and a color activity of 3. For tube 3, we received a solution color of green and a color activity of 1; for tube 4, we received a solution color of orange and a color activity of 3. Finally, for test tube 5, we received a solution color of blue and a color activity of 0. The highest rates of activity were found in test tubes 2 and 4.

The control in this experiment was test tube number 2, which was kept at the room-temperature environment of 24 °C. In test C, effect of denaturation, the boiled sucrose and sucrase received slightly lower color activities than the non-boiled tube. As shown on graph 8.3, the graph begins with no movement in rate of activity, followed by a steady increase in color activity. The results show that test tube 1, which contained boiled sucrase and sucrose, had a solution color of green and a color activity of 1.

Test tube 2, which contained boiled sucrase, had a solution color of green and a color activity of 1; test tube 3, which contained boiled sucrose, had a solution color of orange and a color activity of 3; finally, test tube 4, in which neither was boiled, had a solution color of red and a color activity of 4. From the results, the tube in which neither was boiled had the highest color activity. The control in this experiment was test tube 4, which was completely untouched. In test D, effect of substrate concentration, the higher concentrations of sucrose received a higher color activity.

The graph is represented by a constant followed by a steady drop as the concentration of sucrose decreases. The results showed that in test tube 1, which contained 100% sucrose, the solution color was red and the solution gained a color activity of 4. In test tube 2, with a sucrose concentration of 50%, the solution color was also red, with a color activity of 4. In test tube 3, which contained 25% sucrose, the solution color was orange with a color activity of 3; in test tube 4, which contained a 10% concentration, the solution color was green with a color activity of 1.

In the last test tube, which contained no sucrose, the solution color was blue and there was no color activity. The control in this experiment was test tube 5, which contained no sucrose at all.

Discussion

In the first test, the effect of pH, the results show that a slightly acidic pH increases the rate of reaction while a more basic pH decreases it. Our results show that the rate reaches an optimum at pH 7 before decreasing.

The results are not completely accurate, as the first three tubes all had a color activity of 3. The actual results should have had a slightly higher color activity at the optimum pH (which would have been a pH of 5-6) and a lower color activity at the starting and ending pH. Experimental error may have been caused by unwashed test tubes and slightly inaccurate amounts of solution being placed into the test tubes. The second test examined the effects of temperature. Temperature (as represented in graph 8.2) increases the rate of reaction in the enzyme until an optimum point is reached, after which the rate decreases rapidly.

However, in our results, we were accurate until we reached the optimum point (37 degrees). Instead of this being the highest point for the rate of reaction, we obtained a color activity of 1. Because 37 degrees was the optimum temperature, this should have been the highest point and the highest rate of activity. However, we had an experimental error in the form of accidentally placing the 3rd tube in the wrong temperature environment. The third test examined the effects of denaturation. In this test, the tube that showed the highest color activity was tube four, because it was not exposed to the higher temperatures.

Boiling an enzyme, or exposing it to extreme temperatures, can denature the protein component, thus destroying the enzyme. By boiling the substrate, however, the enzyme’s rate of reaction increases. In our data, the first and second test tubes should have shown no color activity, as such extreme temperatures would have already destroyed the enzyme. There could have been experimental error in how long the test tubes were boiled, as they may not have reached the required temperature. The last test examined the effects of sucrose concentration.

By increasing the amount of substrate, the rate of reaction will also increase, as it is more likely that substrate molecules are close to an enzyme molecule. However, this is only true up to a certain limit, as demonstrated in the chart. Both test tube 1 (which contained 100% sucrose) and test tube 2 (which contained 50% sucrose) have the same color activity despite the significant difference in concentration. This is because the concentration of substrate has reached an approximate saturation point, which in this enzymatic reaction appears to be at or below 50%.
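The saturation behaviour described above follows the standard Michaelis-Menten relationship, v = Vmax[S]/(Km + [S]). The sketch below uses arbitrary Vmax and Km values (not fitted to our lab data) purely to show why 50% and 100% sucrose can give nearly the same rate:

```python
# Illustrative Michaelis-Menten saturation curve; parameters are arbitrary.
V_MAX = 4.0  # maximum rate (arbitrary "color activity" units, an assumption)
K_M = 5.0    # substrate level at half-maximal rate (% sucrose, an assumption)

def rate(substrate_pct, v_max=V_MAX, k_m=K_M):
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
    return v_max * substrate_pct / (k_m + substrate_pct)

for s in [0, 10, 25, 50, 100]:
    print(f"[S] = {s:3d}%  rate = {rate(s):.2f}")
```

Past a few multiples of Km, the curve is nearly flat, so doubling the substrate from 50% to 100% changes the rate only marginally, which is consistent with the identical color activities observed in tubes 1 and 2.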


Combining Vinegar and Baking Soda- Lab Report

The second trial displayed similar results: a weak presence of carbon dioxide. The solution bubbled up and the cork stayed stationary in the top of the bottle, but the solid did not dissolve completely. A third trial was performed in which we decided to increase the amount of vinegar used. The indicator which triggered this decision was the solid remaining at the bottom of the vessel. In the third trial we kept the baking soda constant at ½ tsp. and added 2 ounces of vinegar. The results remained similar to trial two: the solution bubbled, the cork remained stationary in the top of the bottle, and solid remained in the bottom of the vessel. A fourth trial was performed in which we again increased the amount of vinegar added to dissolve the solid. In the fourth trial we kept the baking soda constant at ½ tsp. and added 3 ounces of vinegar. The results improved slightly: although the cork remained stationary, the solution bubbled substantially higher in the bottle, displaying a stronger presence of carbon dioxide.

It was also noted that much less solid remained than in past trials. A fifth trial was performed in which we again increased the amount of vinegar added to dissolve the solid. In the fifth trial we kept the baking soda constant at ½ tsp. and added 4 ounces of vinegar. The results dramatically changed. The solution bubbled almost immediately, and so quickly that it overflowed. This suggested that there was more than enough baking soda, that too much vinegar may have been added, and that the technique of the pour may have been too slow; otherwise the results may have been different.

A sixth and final trial was done in which we kept the baking soda constant at ½ tsp. and reduced the amount of vinegar poured to approximately 3½ ounces. The pour was done more quickly, and the bubbling reaction took place almost immediately. The cork was placed in the bottle after the overflowing had started to occur, so the reaction of the cork popping was still not quite achieved; however, the last trial did show a large amount of carbon dioxide present. The data from each trial is recorded in the table below on the following page.

In order to study the reaction, we created trials which would allow the chemicals to combine within a vessel. The movement or lack of movement of the cork allowed us to measure the amount of carbon dioxide present in each experiment. My results showed that the trial with the greatest reaction was the final trial, because the solution bubbled more than in the other trials. If the cork had been placed inside the bottle more quickly, or if the pour had been slightly slower, the cork would have popped with stronger force.

The trial with the least reaction was trial one, because the solution bubbled the least, showing a weak presence of carbon dioxide; the solid dissolved completely, and the cork remained completely stationary, showing there was very little pressure within the vessel. While observing the experiment, I noticed that the more vinegar added and the quicker the pour, the greater the reaction and the more the solution bubbled. In order to further investigate the experiment, next time I would try having only one student perform the pour, to keep the control of the pour and the pressure applied to the cork consistent.
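A rough stoichiometric estimate helps interpret these trials. The reaction is NaHCO3 + CH3COOH → CH3COONa + H2O + CO2, one mole of CO2 per mole of each reactant. The quantities below are assumptions, not measurements from the lab: roughly 2.3 g of baking soda per half teaspoon, and household vinegar taken as 5% acetic acid by mass (about 29.5 g per fluid ounce):

```python
# Rough CO2 yield estimate for baking soda + vinegar (1:1 reaction).
# Assumed quantities: ~2.3 g NaHCO3 per half tsp, vinegar 5% acetic acid,
# ~29.5 g of vinegar per fluid ounce. Not measured lab values.
M_NAHCO3 = 84.01      # g/mol, sodium bicarbonate
M_ACETIC = 60.05      # g/mol, acetic acid
MOLAR_VOLUME = 24.45  # L/mol for an ideal gas near 25 deg C

def co2_litres(baking_soda_g, vinegar_oz, acid_fraction=0.05):
    """Litres of CO2 produced by the limiting reagent."""
    mol_base = baking_soda_g / M_NAHCO3
    mol_acid = vinegar_oz * 29.5 * acid_fraction / M_ACETIC
    return min(mol_base, mol_acid) * MOLAR_VOLUME

for oz in (2, 3, 4):
    print(f"{oz} oz vinegar -> ~{co2_litres(2.3, oz):.2f} L CO2")
```

Under these assumptions the baking soda, not the vinegar, is already the limiting reagent at 2 ounces of vinegar, so the stronger reactions observed at higher vinegar volumes likely reflect faster mixing and dissolution rather than extra reagent.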


Global Warming: Causes and Effects

Global Warming: Causes and Effects

The term “global warming” is often used synonymously with the term climate change, but the two terms have distinct meanings. Global warming is a gradual increase in the earth’s temperature. Novdia explained that “global warming refers to the documented historical warming of the Earth’s surface, based upon worldwide temperature records that have been maintained by humans since the 1880s” (Global Warming). Global warming is a major crisis in the world today. Three aspects of global warming are the greenhouse effect, the increase of carbon dioxide emissions, and the effects of temperature increase.

The greenhouse effect is a primary cause of global warming. It is a gradual rise in temperature in the earth’s atmosphere due to the absorption of heat from the sun and the entrapment of gases (water vapor, carbon dioxide, methane) in the air around the earth. An example of this is how bright sunlight will effectively warm an individual’s car on a cold, clear day through the greenhouse effect. The longer infrared wavelengths radiated by sun-warmed objects do not pass readily through the glass, and the trapping of this energy warms the interior of the vehicle.

The trapping of the hot air, so that it cannot rise and lose energy by convection, also plays a major role. Stephen Novdia stated, “The greenhouse effect was discovered by Joseph Fourier in 1824, first reliably experimented on by John Tyndall in 1858, and first reported quantitatively by Svante Arrhenius in 1896” (Global Warming). The greenhouse effect is thus a primary cause of global warming. Carbon dioxide (CO2) emissions also cause global warming. CO2 is a gas in the earth’s atmosphere that comes largely from burning fossil fuels. Fossil fuels are natural resources formed from the decomposition of buried dead organisms.

A study on carbon dioxide emissions in a magazine article from May 2009 found the following evidence: “In the 19th century, scientists realized that gases in the atmosphere cause a ‘greenhouse effect’ which affects the planet’s temperature. [The planet’s temperature will increase dramatically as the gases in the atmosphere absorb the heat.] These scientists were interested chiefly in the possibility that a lower level of carbon dioxide gas might explain the ice ages of the distant past. At the turn of the century, Svante Arrhenius calculated that emissions from human industry might someday bring a global warming.

Other scientists dismissed his idea as faulty. In 1938, G. S. Callendar argued that the level of carbon dioxide was climbing and raising [the] global temperature, but most scientists found his arguments implausible. It was almost by chance that a few researchers in the 1950s discovered that global warming truly was possible. In the early 1960s, C. D. Keeling measured the level of carbon dioxide in the atmosphere: it was rising fast. Researchers began to take an interest, struggling to understand how the level of carbon dioxide had changed in the past, and how the level was influenced by chemical and biological forces.

They found that the gas plays a crucial role in climate change, so that the rising level could gravely affect our future” (Weart). This magazine article influences Americans to recycle because it is geared toward them. As CO2 causes the temperature increase, it also causes an increase in the greenhouse effect. CO2 can affect climate change and the future of the world if the greenhouse effect increases. Causes of global warming exist, and so do the effects. Revkin stated, “Americans lead in moving to a world where ‘fossil fuels have been largely modified for carbon recycling or replaced by carbon-neutral alternatives’” (Challenges to Both Left and Right on Global Warming). One result of the temperature increase is the melting of the polar ice caps. When the temperature increases, the polar ice caps will melt, causing glaciers to melt as well. Glaciers are made up of fresh frozen water, and when they melt into the ocean, which is composed of salt water, the ocean currents will be altered. The altering of ocean currents will also affect the species that live in the ocean. Some species live in salt water and others live in fresh water, but they cannot live in both. The increase of water in the ocean will cause evaporation to increase.

The probability and intensity of droughts and heat waves will increase as well. With all the additional water in the ocean and the temperature increasing, there will be warmer water and more hurricanes. Global warming has both causes and effects. It is one of the greatest crises affecting the world today, yet the most difficult to resolve. If I had a choice, I would make everybody recycle and reuse resources. I would also have Americans reduce their carbon emissions. Even if we stopped emitting greenhouse gases today, the Earth would still warm by some degrees Fahrenheit.
