Solenoids – Physics coursework

Solenoids
Permanent Magnets – Magnetic Fields

Source: http://www.diracdelta.co.uk/science/source/m/a/magnetic%20field/source.html

Magnets have two poles, called north and south.

Similar (like) magnetic poles repel; unlike magnetic poles attract. A magnet attracts a piece of iron. Of the two properties, repulsion is the more important test: the only way to tell whether an object is magnetised is to see if it repels another magnetised object. The strength and direction of a magnetic field are represented by magnetic field lines, which by convention run from north to south. A magnetic field is three-dimensional, although this is not often shown on a drawing of magnetic field lines.

Electromagnets
A magnetic field exists around all wires carrying a current. When there is no current, compass needles placed around the wire line up with the Earth’s magnetic field; when a current flows, the wire produces a circular magnetic field and the needles align with it. For a coil of wire, the magnetic fields from each of the turns add together, so the total magnetic field is much stronger. This produces a field similar to that of a bar magnet. A coil of wire like this is often called a solenoid.

http://www.bbc.co.uk/bitesize/standard/physics/using_electricity/movement_from_electricity/revision/1/slideshow-1/2/

An electromagnet consists of a coil of wire, through which a current can be passed, wrapped around a soft iron core. This core of magnetic material increases the strength of the field due to the coil. ‘Soft’ iron is easily magnetised and easy to demagnetise: it does not retain its magnetism after the current is switched off. Steel, on the other hand, is hard to magnetise and demagnetise, and so it retains its magnetism; it is used for permanent magnets. The strength of an electromagnet depends on the following factors (a short worked sketch follows the list):

The size of the current flowing through the coil
The number of turns in the coil
The material inside of the coil
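The worked sketch referenced above: a rough illustration (not part of the coursework) of how the three factors scale the field inside a long solenoid, using the standard relation B = μ0·μr·(N/L)·I. All numbers are made-up example values.

```python
# Minimal sketch (illustrative values, not coursework data): flux density inside a
# long solenoid, B = mu_0 * mu_r * (N / L) * I, showing how current, turns and core
# material each scale the field strength.

import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, T.m/A

def solenoid_field(current_a: float, turns: int, length_m: float, mu_r: float = 1.0) -> float:
    """Approximate flux density (tesla) inside a long, tightly wound solenoid."""
    return MU_0 * mu_r * (turns / length_m) * current_a

# Doubling the current or the number of turns doubles B; a 'soft' iron core
# (relative permeability mu_r >> 1, the value below is only illustrative) multiplies it further.
print(solenoid_field(2.0, 500, 0.10))            # air core
print(solenoid_field(2.0, 500, 0.10, mu_r=200))  # iron core
```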
Source: Heinemann Physics
Domains – http://hyperphysics.phy-astr.gsu.edu/hbase/solids/ferro.html#c4

Ferromagnetic materials exhibit a long-range ordering phenomenon at the atomic level which causes the unpaired electron spins to line up parallel with each other in a region called a domain. Within the domain, the magnetic field is intense, but in a bulk sample the material will usually be unmagnetized because the many domains will themselves be randomly oriented with respect to one another. The main implication of the domains is that there is already a high degree of magnetization in ferromagnetic materials within individual domains, but that in the absence of external magnetic fields those domains are randomly oriented. A modest applied magnetic field can cause a larger degree of alignment of the magnetic moments with the external field, giving a large multiplication of the applied field.

Ferromagnetism

Iron, nickel, cobalt and some of the rare earths (gadolinium, dysprosium) exhibit a unique magnetic behavior which is called ferromagnetism because iron (ferrum in Latin) is the most common and most dramatic example.


Principles of Operation of DC Machines


Summary

This chapter gives a background to the principles behind the operation of dc motors and stepper motors.

Permanent magnet, shunt, separately excited, series and compound wound dc motor connections are described. A description of the equations behind the basic behavior of these machines is given, and the torque vs speed and speed vs armature (voltage and current) characteristics are illustrated, which gives a background to the control of these motors.

Introduction

Electrical machinery has been in existence for many years, and the applications of electrical machines have expanded rapidly since their first use.

At the present time, applications continue to increase at a rapid rate. The use of electrical motors has increased for home appliances and for industrial and commercial applications driving machines and sophisticated equipment. Many machines and automated industrial equipment require precise control. Direct current motors are ideal for applications where speed and torque control are required. Direct current motor design and complexity have changed since early times, when dc machines were used primarily for traction applications.

Direct current motors are used for various applications ranging from steel rolling mills to tiny robotic systems. Motor control methods have now become more critical to the efficient and effective operation of machines and equipment. Such innovations as servo control systems and industrial robots have led to new developments in motor design. Our complex system of transportation has also had an impact on the use of electrical machines. Automobiles and other means of ground transportation use electrical motors for starting and generators for their battery-charging systems.

Recently there have been considerable developments in electric vehicles and also in hybrid electric vehicles which use a combination of a dc motor and an internal combustion engine for efficient operation. In this chapter machines driven by dc electrical supplies are considered. Since the operation of this type of machine is based upon the flow of current in conductors and their interaction with magnetic fields, common principles that underlie the behavior of dc machines will be examined first.

 Magnetism and Electromagnetic Principles

Magnetism and electromagnetic principles are the basis of operation of rotating electrical machines and power systems. For this reason, a review of basic magnetic and electromagnetic principles will be given.

Permanent Magnets

Permanent magnets are generally made of iron, cobalt, nickel or other ‘hard’ magnetic materials, usually in an alloy combination. The ends of a magnet are called north and south poles.

The north pole of a magnet will attract the south pole of another permanent magnet. A north pole repels another north pole and a south pole repels another south pole. The two laws of magnetism are: 1) Unlike poles attract (see Figure 1); 2) Like poles repel (see Figure 2). The magnetic field patterns when two permanent magnets are placed end to end are shown in Figures 1 and 2. When the magnets are farther apart, a smaller force of attraction or repulsion exists. A magnetic field, made up of lines of force or magnetic flux, is set up around any magnetic material.

These magnetic flux lines are invisible but have a definite direction from the magnet’s north to south pole along the outside of the magnet. When magnetic flux lines are close together, the magnetic field is stronger than when they are further apart. These basic principles of magnetism are extremely important for the operation of electrical machines.

Figure 1: Unlike poles attract
Figure 2: Like poles repel

Magnetic Field around Conductors

Current-carrying conductors, such as those in electrical machines, produce a magnetic field. It is possible to show the presence of a magnetic field around a current-carrying conductor.

A compass may be used to show that magnetic flux lines around a conductor are circular in shape. A method of remembering the direction of magnetic flux around a conductor is the right-hand “cork-screw” rule. If a conductor is held in the right hand as shown in Figure 3, with the thumb pointing in the direction of current flow from positive to negative, the fingers then encircle the conductor, pointing in the direction of the magnetic flux lines.

Figure 3: Right-hand rule

The circular magnetic field is stronger near the conductor and becomes weaker at a greater distance. A cross-sectional end view of a conductor with current flowing toward the observer is shown in Figure 4. Current flow towards the observer is shown by a circle with a dot in the centre. Notice that the direction of the magnetic flux lines is counter-clockwise, as verified by using the right-hand rule.

Figure 4: Current out of the page

When the direction of current flow through a conductor is reversed, the direction of the magnetic lines of force is also reversed.

The cross-sectional end view of a conductor in Figure 5 shows current flow in a direction away from the observer. Notice that the direction of the magnetic lines of force is now clockwise.

Figure 5: Current into the page

When two conductors are placed parallel to each other, and the direction of current through both of them is the same, the magnetic field lines combine and the two conductors are attracted together. See Figure 6.

Figure 6: Two parallel conductors

The presence of magnetic lines of force around a current-carrying conductor can be observed by using a compass. When a compass is moved around the outside of a conductor, its needle will align itself tangentially to the lines of force as shown in Figure 7. When current flow is in the opposite direction, the compass polarity reverses but remains tangential to the conductor.

Figure 7: Field’s effect on a compass
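As a small quantitative aside (not from the original chapter, though consistent with the statements above), the flux density around a long straight conductor falls off in inverse proportion to the distance from it, B = μ0·I/(2πr). A minimal sketch with made-up example values:

```python
# Minimal sketch: flux density a distance r from a long straight conductor,
# B = mu_0 * I / (2 * pi * r).  The current and radii below are illustrative only.

import math

MU_0 = 4 * math.pi * 1e-7  # T.m/A

def field_around_wire(current_a: float, radius_m: float) -> float:
    """Flux density (tesla) at radius_m from a long straight wire carrying current_a."""
    return MU_0 * current_a / (2 * math.pi * radius_m)

for r in (0.01, 0.02, 0.05):   # doubling the distance halves the field
    print(r, field_around_wire(5.0, r))
```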

Magnetic Field around a Coil

The magnetic field around one loop of wire is shown in Figure 8.

Figure 8: Loop of wire

Magnetic flux lines extend around the conductor as shown when current passes through the loop. Inside the loop, the magnetic flux is in one direction. When many loops are joined together to form a coil as shown in Figure 9, the magnetic flux lines surround the coil as shown in Figure 10. The field produced by a coil is much stronger than the field of one loop of wire. The field produced by a coil is similar in shape to the field around a bar magnet. A coil carrying current, often with an iron or steel core inside it, is called an electromagnet.

The purpose of a core is to provide a low reluctance path for magnetic flux, thus increasing the flux that will be present in the coil for a given number of turns and current through the coil.

Figure 9: Coil formed by loops
Figure 10: Cross-sectional view of the above coil
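To make the reluctance argument concrete, here is a hedged magnetic-circuit sketch (not from the original chapter): flux = m.m.f./reluctance, with reluctance R = l/(μ0·μr·A), so a high-permeability core raises the flux for the same turns and current. The dimensions and the μr value are invented for illustration.

```python
# Minimal magnetic-circuit sketch (illustrative values only):
# flux = m.m.f. / reluctance, where reluctance R = l / (mu_0 * mu_r * A).
# A high-permeability core (large mu_r) lowers the reluctance and raises the flux
# for the same number of turns and the same current.

import math

MU_0 = 4 * math.pi * 1e-7  # T.m/A

def flux_weber(turns: int, current_a: float, path_len_m: float,
               area_m2: float, mu_r: float) -> float:
    reluctance = path_len_m / (MU_0 * mu_r * area_m2)  # ampere-turns per weber
    return turns * current_a / reluctance              # m.m.f. divided by reluctance

# Same coil with an air core vs. an iron core (mu_r = 1000 is an assumed value).
print(flux_weber(200, 1.0, 0.20, 1e-4, mu_r=1))
print(flux_weber(200, 1.0, 0.20, 1e-4, mu_r=1000))
```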

Electromagnets

Electromagnets are produced when current flows through a coil of wire as shown below. Almost all electrical machines have electromagnetic coils. The north pole of a coil of wire is the end where the lines of force exit, while the south pole is the end where the lines of force enter the coil.

To find the north pole of a coil, use the right-hand rule for polarity, as shown in Figure 11. Grasp the coil with the right hand and point the fingers in the direction of current flow through the coil; the thumb will then point to the north pole of the coil. When the polarity of the voltage source is reversed, the magnetic poles of the coil reverse.

Figure 11: Finding the north pole of an electromagnet

The poles of an electromagnet can be checked by placing a compass near a pole of the electromagnet.

The north-seeking pole of the compass will point toward the north pole of the coil.

Magnetic Strength of Electromagnets

The magnetic strength of an electromagnet depends on three factors:

  1. the amount of current passing through the coil,
  2.  the number of turns of wire, and
  3.  the type of core material.

The number of magnetic lines of force is increased by increasing the current, by increasing the number of turns of wire, by decreasing any air gap in the path of the magnetic flux, or by using a more desirable type of core material.

Electromagnetic Induction

The principle of electromagnetic induction is one of the most important discoveries in the development of modern electrical technology. Electromagnetic induction is the induction of an electric voltage in an electrical circuit caused by a change in the magnetic field coupled to the circuit. When electrical conductors, such as alternator windings, are moved within a magnetic field, an electrical voltage is developed in the conductors.

The electrical voltage produced in this way is called an induced voltage. A simplified illustration showing how induced voltage is developed is shown in Figure 12. Michael Faraday developed this principle in the early nineteenth century.

Figure 12: Faraday’s law

If a conductor is placed within the magnetic field of a horseshoe magnet so that the left side of the magnet has a north pole (N) and the right side has a south pole (S), magnetic lines of force travel from the north pole of the magnet to the south pole.

The ends of the conductor in Figure 12 are connected to a voltmeter to measure the induced voltage. The meter can move either to the left or to the right to indicate the direction and magnitude of the induced voltage. When the conductor is moved, the amount of magnetic flux contained within the electrical circuit (which includes the wire and the connections to the meter and the meter itself) changes. This change induces a voltage in the conductor. Electromagnetic induction takes place whenever there is a change in the amount of flux coupled by a circuit.

In this case the motion of the conductor in the up direction causes more magnetic flux to be contained within the circuit and the meter needle moves in one direction. Motion of the conductor in the down direction causes less magnetic flux to be coupled by the circuit and the meter needle moves in the opposite direction. The principle demonstrated here is the basis for large-scale electrical power generation.
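To put a number on the flux-change idea, a short worked sketch (the coil and flux values are invented; the relation itself, average e.m.f. = −N·ΔΦ/Δt, is the standard statement of Faraday's law):

```python
# Minimal Faraday's-law sketch (illustrative numbers, not measurements from the text):
# the average induced e.m.f. equals the number of turns times the rate of change of
# the flux linked by the circuit, emf = -N * dPhi/dt.

def induced_emf(turns: int, delta_flux_wb: float, delta_t_s: float) -> float:
    """Average induced e.m.f. (volts) for a change in linked flux over a time interval."""
    return -turns * delta_flux_wb / delta_t_s

# A 50-turn coil whose linked flux rises by 2 mWb in 0.1 s:
print(induced_emf(50, 2e-3, 0.1))   # -1.0 V; moving the conductor the other way flips the sign
```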

In order for an induced current to be developed, the conductor must form part of a complete path or closed circuit; the induced voltage will then cause a current to flow in the circuit.

Current-Carrying Wires and Coils

The basic requirement of any electrical machine, whether ac or dc, is a method of producing torque. This section explores how two magnetic fields in a machine interact to produce a force which produces a torque in a rotating machine.



Prewriting for the Process Analysis Essay

Whoever said life was going to be easy was wrong; the sooner everyone learns that, the better off they will be. Life in itself is very stressful, but when you throw in being a wife, mother, student and employee it’s almost too much to bear. In order to deal with everything I must do, I have a calendar with everything on it and how much time needs to be spent on each task. Now, where should I begin? I’m a wife, and therefore I must make sure my husband has clean work clothes and food for work. We all know how men don’t like to do their own laundry or cook.

At least my husband doesn’t like to. Don’t get me started on trying to keep my house clean with him and my animals; it’s like a tornado hits this place just about every other day. Dirty dishes, clothes, cat toys, food, cat litter: just about anything you can think of. I also feed and clean up after the animals; I have five cats inside, two chameleons, and a cat outside. I must wake up every morning at 7 AM to feed them. I’m their mother now, so they all depend on me. If all that is not enough, I must also work for a living.

I clean a local movie theater in my town. I’ll tell you right now, if everything I deal with at home isn’t stressful enough, when I get to work and walk through those doors the stress slaps me in the face. I just don’t understand how people can be so nasty. The restrooms are another story; they are so gross. I had never realized how dirty and inconsiderate people really are, and that makes my life stressful when I have to pick up after all of them. With all that being said, how could I possibly find any time to do my school work? But I always seem to do it.

It’s very hard to do daily things and then try to sit down and do homework, especially the section I’m doing now with writing. I get so stressed out because I get very aggravated when I try to write, because I’m not good at it. It seems to take me forever to get it done, and that makes me just want to throw the pen and paper down and quit. But I seem to figure out a way to get it done and not go insane in the process. I’m not totally sure if there is a solid way to cope with stress, but there are many ways to try and manage it.

When life seems to get too stressful or even out of control, I will go outside and walk around while taking in fresh air to try and clear my head. I picked up a nasty little habit a long time ago; I know it’s not good and I need to quit, but it’s hard and it seems to really help in stressful times, and of course this is none other than smoking. I guess it’s really just a matter of what kind of stress it is and how bad it is. Sometimes I turn everything off, turn on the radio and just lie on the bed and listen to the music.

Sometimes if it’s just bad enough I will end up crying, and yes, it sounds childish, but it seems to unleash the stress and wash it away as if it were water in a bathtub when the plug gets taken out. Regardless of the situation, I pull myself together before it gets out of hand. So many responsibilities every day is a bit tough, but there is always a way to do everything. The way I get everything done is just having a plan for each specific thing. Whoever said life was going to be easy was wrong; the sooner everyone learns that, the better off they will be.

First thing, I must take care of my family and home life. There are a lot of things that must be done when it comes to my home responsibilities. I must make sure my house is nice and clean; we all know it’s not very pleasant living in a dirty home. With that I have to wash dishes, clean the bathrooms and put things away. I make sure everything is put up in the right places, make all the beds and vacuum all the floors, then sweep and mop. I have to gather all the dirty clothes and get them done, and then there is the outside to deal with.

I always make sure the outside of my house is nice and neat; I don’t like to have my house looking like a bunch of wild animals live here. I keep my yard mowed and everything put up and kept in order. We burn wood, so I make sure there is wood cut, split and stacked up out at the tree, and I make sure the wood stack is neat, stacked well and covered up away from the weather. Next I must tend to my children, the non-human kind. That’s a whole task in itself; I believe it’s probably harder than dealing with human children. They make things very stressful; they meow over and over like it’s going out of style.

I have to feed them, and when that time comes you had better watch out, because they all come running through the house at once; they know what time it is, and if you’re in their path they will knock you over like a pack of hungry wolves. I have to make sure their bathroom is clean and make sure the cats outside are fed and taken care of too. Then I have to turn my attention to my chameleons; I have to make sure they have water, because they don’t drink out of a dish, so I have to give it to them out of a spray bottle or they will die.

I have to make sure they have bugs in their cages so they can eat. I have a lot of things to do on a daily basis; it gets tough and very stressful, so I have to make sure I plan everything down to the last thing, because otherwise I would never have the time to do it all. I have a small window of time after I take care of my animals to do some school work, so I try to buckle down and do as much as possible. Sometimes the lessons are hard, but I have to try to clear all the stuff out of my head from everything else I have had to do and just try my best.

When work time comes around I get ready and head out the door. I have to clean the movie theater. There are seven theaters, two bathrooms, two hallways and one lobby; the place is pretty big and I do all this by myself. When all this is done I go back home, and whether I’m tired or not I sit down and do more school work. When times are stressful I stop, pull myself together and calm down. When we get stressed out we just want to quit, just to let the stress go, but whether you know it or not, calming yourself down and working through it to get everything done is very much worth it.


Pressure Management on a Supercritical Airfoil

Pressure Management on a Supercritical Aerofoil in Transonic Flow

Abstract – At transonic speeds the flow over an aerofoil accelerates from the leading edge to sonic speed and produces a shockwave over the surface of its body. One factor that determines the shockwave location is the flow speed. However, the shape of an aerofoil also has an influence. The experiment conducted compared transonic flow over a supercritical aerofoil (flattened upper surface) and a NACA 0012 aerofoil (symmetrical).

Despite discrepancies, the experiment confirmed that the aerodynamic performance of a supercritical aerofoil is superior to that of a conventional aerofoil. A comparison of the graphical distributions demonstrates the more even pressure distribution on a supercritical aerofoil and a longer delay in shockwave formation. All of this reflects the theory.

Table of Contents
Introduction
Apparatus
Induction Wind Tunnel with Transonic Test Section
Mercury Manometer
Procedure
Theory and Equations
Results
Discussion
Theory of Transonic Flight
Relating the Theory to the Experiment

Effectiveness of Supercritical Aerofoils
Limitations and Improvements
Appendix
References

Introduction

For any object travelling through a fluid such as air, a pressure distribution exists over all of its surface, which helps generate the necessary lift. Lift is an aerodynamic force perpendicular to the direction of the oncoming flow. Transonic speeds result in the formation of shockwaves over the top surface of the aerofoil. This is due to accelerated flow over the surface region; this regime lies approximately between Mach 0.8 and 1.2. Since the flow must accelerate and then loses velocity following the shockwave, the aerofoil will have both a subsonic and a supersonic region. For the majority of commercial airliners this is not a desirable region at which to cruise, given the abrupt changes in pressure distribution which passengers would otherwise experience, and particularly the formation of shock-induced boundary layer separation. Supercritical aerofoils are designed for higher Mach speeds and drag reduction. They are distinct from conventional aerofoils by their flattened upper surface and asymmetrical design.

The main advantage of this type of aerofoil is that shockwaves develop further downstream than on traditional aerofoils, greatly reducing the shock-induced boundary layer separation. In order to truly understand the effectiveness of a supercritical aerofoil, an experiment gathering supercritical aerofoil performance data and raw data for a NACA 0012 aerofoil will be extensively analysed and compared. Following the calculations and procedure, it will be assessed whether a supercritical aerofoil is more effective.

Apparatus

Induction Wind Tunnel with Transonic Test Section

A wind tunnel with a transonic test section was used in this experiment to study transonic flow around an aerofoil. The test section consists of liners which, after the initial contraction, are nominally parallel apart from a slight divergence to compensate for growth of the boundary layers on the wall. In order to reduce interference and blockage at transonic speeds, the top and bottom liners are ventilated by longitudinal slots backed by plenum chambers. The working section has a height and width of 178 mm and 89 mm respectively. The stagnation pressure, p0∞, in the tunnel is close to atmospheric pressure, and therefore it can be taken to be equal to the settling-chamber pressure, as the errors are only small. To minimise the disturbance due to the model itself, the reference static pressure, p∞, is taken from a pressure tapping in the floor of the working section, well upstream of the model. The nominal ‘free-stream’ Mach number, M∞, in the tunnel can be calculated from the ratio p∞/p0∞. The Mach number in the tunnel can be controlled by varying the pressure of the injected air, pj. The maximum Mach number that the tunnel can achieve is about 0.8.

Mercury Manometer

A multi-tube manometer with mercury was used to measure the pressure at stagnation, at the aerofoil tappings and of the atmosphere. The manometer is equipped with a locking mechanism which allows the mercury levels to be ‘frozen’ so that readings can be taken once the flow has been stopped. Also, the angle of the manometer can be adjusted; for this experiment it was set to 45 degrees (Motellebi, F., 2012).

Procedure

Before conducting the experiment, the barometric pressure, Pat, was recorded in inches of mercury, and the atmospheric temperature was recorded in degrees Celsius.

For a range of values of Pj from 10 to 110 lb/in2, in intervals of 20 lb/in2, Pj was recorded along with the manometer readings corresponding to the stagnation pressure (I0∞), the reference static pressure (I∞), the aerofoil pressure tappings (In, n = 1 to 8 and 3a) and the atmospheric pressure (Iat), all in inches of mercury (Motellebi, F., 2012).

Results (raw data in appendix)

Figures 1a and 1b: -Cp against x/c at M = 0.85

The experimental data was converted to absolute pressure values using Equation x (units are inches of mercury).

For a given value of the injector pressure (Pinjector) we can find the value of the Mach number using Equation y. Equation z calculates Cp (the pressure coefficient), which reflects the measurements over the surface of the aerofoil. These results are displayed in the figures below. This was done for both the supercritical aerofoil and the NACA 0012 aerofoil. What follows is a comparison and analysis of the data.

Figures 2a and 2b: -Cp against x/c at M = 0.81 and 0.8
Figures 3a and 3b: -Cp against x/c at M ≈ 0.72-0.73
Figures 4a and 4b: -Cp against x/c at M = 0.61
Figures 5a and 5b: -Cp against x/c at M = 0.45 and 0.44

Note that for both the supercritical and NACA 0012 aerofoils, in the supercritical cases (where M is equal to 0.77, 0.83 and 0.840) the approximate value of x/c (%) where the shock occurs over the aerofoil is shown as a red line. The point at which Cp falls below the critical value, and hence where the drop in Cp is greatest, gives the location of the shockwave on the surface of the aerofoil.

Figures: Cp and Cp* vs M∞ (NACA 0012 aerofoil); Cp and Cp* vs M∞ (supercritical aerofoil)

It is worth noting that for both the supercritical and NACA 0012 aerofoils the results are somewhat similar: the critical Mach numbers for both are around 0.72. Therefore the minimum Mach number for a local shockwave on both the supercritical and the conventional aerofoil can be assumed to be the same. It is also worth noting that at Mach 0.41 the supercritical aerofoil does not produce a shockwave, whereas the NACA 0012 aerofoil does.

Mach number | Supercritical aerofoil: approx. position of shock | NACA 0012: approx. position of shock
0.5 | - | -
0.61 | - | -
0.72-0.73 | - | 0.25 x/c
0.85-0.86 | 0.70 x/c | 0.40 x/c

Basic transonic theory

An aerofoil, or any object for that matter, travelling through a medium (air) at low Mach numbers (typically between 0.30 and 0.40) has subsonic flow which can be considered incompressible. This means that any change in density is insignificant. The speed of sound (a) is dependent on the altitude of the aerofoil/object, and the Mach number M is the ratio of the object’s velocity to the speed of sound: M = v/a, where a = √(γRT); γ is the specific heat ratio, T is the absolute temperature and R is the gas constant.

The combination of these two equations gives M = v/√(γRT). Sound is essentially a series of consecutive weak pressure waves emitted from a given source. These waves travel at the local speed of sound. Because the waves travel ahead of a slowly moving object, the flow ahead notices the disturbance beforehand, giving it enough time to adjust around the object. When the object begins to approach the speed of sound, the pressure waves move closer together in front of it; inadequate information about the disturbance is therefore propagated upstream, and the flow is not able to react in time.

The pressure waves merge together to produce a shockwave in front of the object. The flow encountering the shockwave experiences changes in temperature, static pressure and gas density, as well as a lower Mach number. The transonic region is special because, although the flight speed is below sonic speed, the flow accelerating over the surface of the aerofoil reaches the speed of sound locally, thus forming a shockwave over the aerofoil. The position of this shockwave depends on the initial entry speed to the aerofoil.

Therefore what we have in the transonic region is an aerofoil with supersonic flow over the forward part of the upper surface and subsonic flow towards the end of the aerofoil, or downstream. This means it is complicated to accurately analyse transonic flow over an aerofoil, as a different set of equations must be used on the leading edge, upper surface and trailing edge. The critical upstream Mach number is the minimum free-stream Mach number for which a shockwave will be produced on the surface of an aerofoil; in other words, for which the flow first becomes locally supersonic.
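As an aside (this is not the handout's 'Equation y' or 'Equation z', just the standard isentropic relations), the critical pressure coefficient used in the Cp/Cp* comparison can be written as Cp* = (2/(γM∞²))·[((2 + (γ−1)M∞²)/(γ+1))^(γ/(γ−1)) − 1]; the critical Mach number is then the free-stream Mach number at which the most negative measured Cp first reaches Cp*. A minimal sketch:

```python
# Illustrative sketch (standard isentropic air relations; gamma = 1.4 is the usual
# assumption for air, and the Mach numbers below are simply the ones quoted in the report):
# Cp* is the pressure coefficient at the point where the local flow is exactly sonic.

GAMMA = 1.4  # ratio of specific heats for air

def cp_critical(m_inf: float) -> float:
    """Critical pressure coefficient for a given free-stream Mach number."""
    p_ratio = (2 + (GAMMA - 1) * m_inf**2) / (GAMMA + 1)          # p*/p_inf
    return 2.0 / (GAMMA * m_inf**2) * (p_ratio**(GAMMA / (GAMMA - 1)) - 1.0)

for m in (0.45, 0.61, 0.72, 0.85):
    print(f"M_inf = {m:.2f}  Cp* = {cp_critical(m):+.3f}")
# Cp* becomes less negative as M_inf rises; where the most negative measured Cp on the
# surface equals Cp* (around M_inf ~ 0.72 in this experiment) a local shock can first appear.
```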

Below this threshold a shockwave will not appear. Drag, the aerodynamic resistance force, in the transonic region again depends on the speed of the object. At subsonic speeds the main components of drag are skin friction, pressure drag and lift-induced drag. At speeds approaching or exceeding the sonic speed there is the addition of wave drag. The drag increases dramatically, and as a result a higher thrust is needed to sustain acceleration. Also, at this point the shockwave will interact with the boundary layer, causing it to separate upstream of the shock.

Figure 6: Demonstration of transonic flight (Scott, J., 2000)

The aerofoils

The two aerofoils, the NACA 0012 and the supercritical aerofoil, are different in design and purpose. The NACA 0012 is a basic symmetrical aerofoil used primarily for rudders and elevators. Aerodynamic performance is not a priority, which is reflected in its simple design, and it is worth noting that there are better aerofoils. The supercritical aerofoil is a performance aerofoil designed for higher Mach speeds and drag reduction.

They are distinct from conventional aerofoils by their flattened upper surface and asymmetrical design. The main advantage of this type of aerofoil is that shockwaves develop further downstream than on traditional aerofoils, greatly reducing the shock-induced boundary layer separation.

Relating the Theory to the Experiment

The critical Mach number for both the supercritical aerofoil and the NACA 0012 aerofoil was found to be in the region of 0.72. There is a small difference, but for all intents and purposes we can assume they are the same.

This indicates that the minimum Mach number for a shockwave to be produced on the surface of the aerofoils is the same for both and is not influenced by the shape. The pressure distribution over the supercritical aerofoil (especially at higher Mach numbers) is more even than over the NACA 0012. The experiment confirms the theory that the supercritical aerofoil, in comparison to a conventional aerofoil, generates more lift due to an even distribution of pressure over the upper surface (http://en.wikipedia.org/wiki/Supercritical_airfoil).

Effectiveness of Supercritical Aerofoils

At a Mach number of 0.45 neither aerofoil displays a shockwave. This is evident from the fact that the Cp and Cp* graphs do not intersect at all. We already know this because the critical Mach number is 0.72 for both. This indicates that either a shockwave was not produced (unlikely), or that the shockwave was produced beyond the trailing edge. This means we cannot assess the effectiveness of the supercritical aerofoil at Mach numbers 0.45 and 0.61. The supercritical Mach numbers show varying results. When the experiment took place at Mach 0.72-0.73 (the critical Mach number), the supercritical aerofoil did not produce a shockwave (Cp and Cp* do not intersect) whereas the NACA 0012 aerofoil did. The lack of shockwave formation indicates either that the critical Mach number for the supercritical aerofoil is higher than that of the conventional aerofoil, or that experimental accuracy is lacking. At the supercritical Mach numbers (0.81-0.86), Cp and Cp* intersect for both the NACA 0012 aerofoil and the supercritical aerofoil. The large drop in pressure coefficient is evidence of the formation of a shockwave.

However, the pressure drop for the supercritical aerofoil occurs at a pressure tapping further downstream. This confirms the theory that a shockwave is produced further downstream on a supercritical aerofoil, and that the supercritical design allows shockwaves to develop further downstream than on traditional aerofoils, giving a greater reduction in shock-induced boundary layer separation. In regard to the amount of drag (aerodynamic force) acting on the aerofoils, it is worth noting that the pressure distribution at Mach 0.5 for the supercritical aerofoil is more even and ‘flatter’ than for the NACA 0012 aerofoil. There is no indication of a large instantaneous increase in drag. This would therefore confirm the theory that a supercritical aerofoil is effective in greatly reducing the shock-induced boundary layer separation.

Limitations and Improvements

The experiment was a success, since the results obtained confirm the capabilities of supercritical aerofoils and their advantages over conventional aerofoils. However, there are a few discrepancies regarding experimental error and the differences between the aerofoils.

First of all, the Mach numbers tested at 0.72 and 0.73 made for an inaccurate experiment. Normally this would not be a problem; however, since the critical Mach numbers for both aerofoils were in the vicinity of 0.72, this was expected to be the minimum threshold for a shockwave to be produced over the aerofoil. A shockwave was not produced for the supercritical aerofoil despite the critical Mach number value. Therefore, we can conclude that at this speed there are too many inaccuracies to understand what is really going on.

We also did not really see a difference in performance in subsonic flow. Granted, the supercritical aerofoil was primarily designed for supercritical Mach speeds, so no useful information was obtained here. The fact that the pressure tappings have different coordinates means that each aerofoil shows the pressure distribution at a different set of locations. This, of course, is not as accurate as if the aerofoils had the same pressure tappings. For instance, the NACA 0012 has its first pressure tapping at 6.5% of the aerofoil section and its last at 75%; the rest is unaccounted for.

Since the supercritical aerofoil has different pressure tappings, both aerofoils have different areas which are unaccounted for. This means it is not certain whether the graphs are a reliable source of information, let alone a reliable basis for comparison. A digital meter should also be connected to display the pressure at two reference tappings so the aerofoil can be appropriately adjusted to bring it to zero incidence. This digital meter could also be used to display the value of the mercury levels for the other pressure tappings, reducing human error.

In order to increase the accuracy of the pressure distribution over the aerofoil surface, more pressure tappings can be made on the aerofoil. These will improve the pressure coefficient graphs by allowing more points to be plotted, in turn yielding better information on the position of the shockwave in the supercritical cases and on the critical Mach number for a shock to occur.

References
1) http://www.southampton.ac.uk/~jps7/Aircraft%20Design%20Resources/aerodynamics/supercritical%20aerofoils.pdf
2) http://www.nasa.gov/centers/dryden/pdf/89232main_TF-2004-13-DFRC.pdf
3)


Driver’s Ed Reflection 3&4

Vocabulary: Please define six (6) of the following terms in your own words. Please do not just copy and paste the definition.
1. Gravity – An invisible force by which an astronomical object attracts things toward it.
2. Inertia – The property of a body by which it remains at rest or continues moving until affected by another force.
3. Potential Energy – The energy that a body or system has stored because of its position.

4. Kinetic Energy – The energy a body or system has because it is moving.
5. Friction – Resistance encountered by a moving object in contact with another object.
6. Traction – The adhesive friction between a moving object and the surface on which it is moving.
7. Centrifugal Force – An apparent force that seems to pull a rotating object away from a center.
8. Centripetal Force – A force that pulls a spinning object toward a center.
9. Deceleration – The rate at which an object slows down.
10. Force of impact –

Module 4 – Signs, Signals & Pavement Markings

1. Explain the purpose of the following in complete-sentence answers, using proper spelling and grammar:
A. Broken yellow lines indicate: Broken yellow lines indicate passing zones for vehicles traveling on a two-way road with opposing traffic.
B. Yellow lines (broken or solid) indicate what type of traffic flow: The side of the road with the solid yellow line facing it is a no-passing zone, while on the opposite side of the road, with the broken yellow line facing it, passing is allowed.
C. Broken white lines indicate: The white line means traffic in both lanes is traveling in the same direction.

The broken lines indicate that drivers may change lanes.

Observe and describe the different signs in YOUR city. Give specific examples of each (include color, shape, what the sign is for, etc.). Write in complete sentences, using proper spelling and grammar.
A. A regulatory sign: There are white signs around key intersections in the town. They have written on them “Buckle Up It’s the law” with a white human stick figure who has a seatbelt on. Just so drivers know what state it is for, they put a green-colored image of the State of Florida on the sign.
B. A motorist services sign:

When we are driving home and coming off the freeway, I always notice a big blue sign with categories. The categories sometimes say “Gas” or “Food” with the emblems of corporations such as Burger King or Shell.
C. A recreational sign: At the beach, there are signs put up far from land for boats. They usually say not to go past this point, or beware of sharks and tidal waves.
D. A sign that you know what it means because of its shape:
*If there is not one of each of the above signs in your town, describe any 3 different types you see in your community.

Answer in complete sentences, using proper spelling and grammar.
2. List 3 interesting or important facts from Modules 3 and 4, using complete sentences and proper spelling and grammar:
A. Recognize the color and type of lines on the road at all times; it could save your life.
B. You cannot pass when a solid yellow line is on your side.
C. Once you start through an intersection, keep going. Last-second changes may cause collisions. If you missed a turn, continue to the next intersection and work your way back to where you want to go.


Fractional Distillation Process

HEALTH, SAFETY AND ENVIRONMENT (CBB 2012)

TITLE: FRACTIONAL DISTILLATION PROCESS

CONTENTS
1. Title
2. Summary
3. Introduction of Case Study
4. Risk Scenario Development
5. Justification of Fault Tree Analysis
6. Procedures of Fault Tree Analysis
7. Fault Tree Analysis
8. Possible Risks Associated with Hazards
9. Accident Consequences

10. Method to Control the Risk
11. Solution to Minimize the Risk
12. Conclusion
13. References

SUMMARY

Crude oil is one of the most important non-renewable resources on Earth. Demand for this black viscous liquid is growing every day in this era of modern technology. Electricity, vehicles and synthetics are among the major consumers of petroleum fluids or crude oil. Crude oil could be referred to as ‘black gold’ due to its expensive price and complicated production process.

Unlike gold, crude oil in its primary form is naturally of little use. A process called fractional distillation, or petroleum refining, needs to be carried out on the crude oil to separate it into various components, which can later be used to supply electricity to residential houses or to power vehicles. Fractional distillation, or petroleum refining, is the process of separating crude oil into different components based on their hydrocarbon chains. It is one of the most important major processes in the oil and gas industry.

Basically there are two types of fractional distillation: laboratory fractional distillation and industrial fractional distillation. They are conducted differently but rely on the same concept. Industrial petroleum refining involves the separation of different lengths of hydrocarbon chain into specific refinery columns, producing products such as petrol, naphtha, kerosene and diesel. However, petroleum refining has its own hazards and risks. The materials involved are highly flammable and could cause a major catastrophe at the plant.

The purpose of this report is to study a case scenario involving the fractional distillation process and its potential hazards and risks.

INTRODUCTION

In 1859, the petroleum industry began with the successful drilling of the first commercial oil well and the opening of the first refinery to process the crude into kerosene. The development of petroleum refining from simple distillation to today’s sophisticated processes has created a need for health and safety management procedures and safe work practices.

Refining is the processing of one complex mixture of hydrocarbons into a number of other complex mixtures of hydrocarbons. In response to changing consumer’s demand for better and different products, petroleum refining has evolved continuously. The original requirement was to produce kerosene as a cheaper and better source of light than whale oil. The production of gasoline and diesel fuels resulted from development of the internal combustion engine. Nowadays, refineries produce a variety of products. It was soon discovered that high quality lubricating oils could be produced by distilling petroleum under vacuum.

For the next 30 years kerosene was the product consumers wanted, until two significant events changed this: first, the invention of the electric light decreased the demand for kerosene; and second, the invention of the internal combustion engine created a demand for diesel fuel and gasoline, also known as naphtha. Most of our modern lifestyle depends on oil. The largest oil refinery is the Paraguana Refining Complex in Venezuela, which can process 940,000 barrels of oil each day. Samuel M. Kier was the first person to refine crude oil, and he used the flammable oil produced by his salt wells to light his salt works at night.

The burning crude produced an awful smell and a great deal of smoke. In 1850, Kier started experimenting with distillation; his refining experiments were successful, and by 1851 Kier had produced a product called Carbon Oil, a fuel oil which burned with little smoke and odor. Samuel M. Kier spent a great deal of his life trying to make crude oil useful and valuable, and by the end of the 1860s he had, along the way, given birth to the U.S. refining industry. This report uses fractional distillation, or petroleum refining, as a case study to determine the hazards and risks involved in the manufacturing process.

Therefore, safety precautions can be taken when countering an accident.

RISK SCENARIO DEVELOPMENT

Risk may be considered as the potential for adverse effects on human health, or for equipment loss, resulting from an activity or event involving exposure to a hazard. A risk scenario is an important concept to establish before conducting a risk assessment. Based on the case study stated previously, a risk scenario involving petroleum refining processes will be developed for an emergency situation, during which a certain procedure needs to be followed to prevent any accident from happening.

There are various compartments in the petroleum refining process. Instead of investigating one particular component or process at a time, this report will generalise across all components involved in crude oil distillation. Petroleum refining involves closed processes. Two categories of risk will be pointed out: pollution risk and hazard risk. Pollution risk includes the release of chemicals into the atmosphere, which could affect the health of people living near the refinery plant.

Apart from air pollution impacts there are also wastewater concerns. Wastewater is liquid waste discharged by domestic residences, commercial properties, industry and agriculture, and it often contains contaminants that result from the mixing of wastewater from different sources. Improper wastewater treatment could pose health problems to humans. Industrial noise is also a potential source of pollution, as it could disturb residential areas near the plant. Hazard risk involves explosion, fire and corrosion.

Heaters and exchangers in the atmospheric and vacuum distillation units could provide a source of ignition. Besides that, there is the potential for a fire should a leak occur within the refinery. Wet hydrogen sulfide will cause cracks in steel, which could lead to leaks. The main hazard risk is corrosion, which is a chemical hazard. Sections of the process susceptible to corrosion include the preheat exchanger (hydrogen sulfide attack), the preheat furnace and bottoms exchanger, and the atmospheric tower and vacuum furnace.

Efficiency in a petroleum refinery is crucial to reducing the cost of maintenance. Corrosion can reduce efficiency and cause equipment failure, as well as interrupting the refinery’s maintenance schedule, during which time all of the components must be shut down. Maintenance related to corrosion in the refinery is very costly and can reach billions of dollars.

JUSTIFICATION OF FAULT TREE ANALYSIS

Fault tree analysis (FTA) is used to analyze the case study. FTA is a failure analysis technique which involves examining the preceding events leading up to a system failure.

The tree starts with the accident event and, working backwards through time, breaks it down into a series of contributory events that are structured according to certain rules and logic. This process of breaking down the event to identify contributory causes and their interaction continues until the root causes are identified. The logic diagram displays the various logical combinations of failures that can result in an accident.

Advantages of Fault Tree Analysis
1. Easy to read and understand.
2. Can handle multiple failures or combinations of failures.

3. Exposes the need for control or protective actions to diminish the risk.
4. Quickly exposes critical paths.
5. The results can provide either qualitative or quantitative data for the risk assessment process.
6. Directs the analyst deductively to accident-related events.
7. Useful in investigating accidents or problems resulting from use of a complex system.
8. Excellent for ensuring interfaces are analyzed as to their contribution to the top undesired event.

Weaknesses of Fault Tree Analysis
1. Though fault trees may reveal human error, they do little to determine the underlying cause.

2. Fault trees require detailed knowledge of the design, construction and operation of the system.
3. Not suitable for assessing normal operations.
4. Fault trees may become very large and complex.
5. Significant training and experience are necessary to use this technique properly. Once the technique has been mastered, application remains time-consuming; however, commercial software is available.
6. Not practical on systems with large numbers of safety-critical failures.

PROCEDURES OF FAULT TREE ANALYSIS
1. Identify a specific component that is to be analyzed. This will be placed at the top of the tree, in its own individual box.
2. Next, all the faults that are to be found within the component need to be identified. This can be done through brainstorming to identify the failures.
3. Faults each have their own box below the component. It is now necessary to work through why the faults have occurred. What were the causes? What actions resulted in these faults being created?
4. All the causes for the faults need to be identified and then set out in boxes, each one linked to the faults that are listed.
5. It is necessary to determine the root cause for each fault, which may require listing several factors that contribute to the accident.

We can then decide which factors can be controlled and altered so that the fault can be avoided. Root causes are then linked to the general causes.
6. Identify countermeasures. Once all the causes and root causes have been identified, countermeasures need to be listed. These are the antidotes to the root causes and will ensure that the faults are eliminated. Countermeasures are then linked to the root cause boxes, because they show the actions that need to be taken.
7. There are two types of gates used in a fault tree analysis:

a. AND: where all the sub-faults and other causative factors must co-exist to cause the fault for which they have been identified.
b. OR: where the fault will occur even if only one of the sub-faults or basic factors exists.

FAULT TREE ANALYSIS

In the process of assessing and identifying the risks in a work environment, hazard analysis is the initial step to be taken. The types of hazard analysis are:
(i) Hazard and operability review (HAZOP)
(ii) Failure mode and effect analysis (FMEA)
(iii) Technique of operations review (TOR)
(iv) Fault tree analysis (FTA)
(v) Human error analysis (HEA)
(vi) Risk analysis

In our case scenario of the risks involved in crude oil distillation processes, the hazard analysis method used is fault tree analysis (FTA).

Definition of Fault Tree Analysis

Fault tree analysis is a graphical representation of the major faults or critical failures associated with a product, the causes of the faults, and potential countermeasures. The tool helps identify areas of concern for new product design or for improvement of existing products. It also helps identify corrective actions to correct or mitigate problems.
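As a rough illustration of how the AND and OR gates combine basic events (the probabilities below are invented for the sketch and are not taken from the refinery case study; independence of the events is assumed):

```python
# Minimal fault-tree sketch: combining basic-event probabilities through OR and AND
# gates to estimate a top-event probability, assuming the events are independent.

from math import prod

def or_gate(probs):
    """P(at least one input occurs) = 1 - product of (1 - p_i)."""
    return 1 - prod(1 - p for p in probs)

def and_gate(probs):
    """P(all inputs occur) = product of p_i."""
    return prod(probs)

# Hypothetical top event "fire": a leak occurs AND an ignition source is present,
# where the ignition source is any one of several causes (spark, hot surface, flame).
p_leak = 0.02                               # assumed probability, for illustration only
p_ignition = or_gate([0.05, 0.01, 0.005])   # assumed probabilities, for illustration only
p_fire = and_gate([p_leak, p_ignition])
print(f"P(ignition source) = {p_ignition:.4f}, P(fire) = {p_fire:.5f}")
```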

Importance of Fault Tree Diagram

FTA is the most suitable hazard analysis here, as it is useful both in designing new products/services and in dealing with identified problems in existing products/services. In the quality planning process, the analysis can be used to optimize process features and goals and to design for critical factors and human error. As part of process improvement, it can be used to help identify root causes of trouble and to design remedies and countermeasures.

Figure 3: Fault-tree analysis diagram

GRAPHIC SYMBOLS FOR FAULT TREE ANALYSIS

Event symbol

Function: event symbols are divided into two types, primary and intermediate events.
•Primary events are not developed further on the fault tree.
•Intermediate events are found at the output of a gate.

Gate symbol
Function: it describes the relationship between input and output events.

Basic event: the failure or error in a system component or element.
OR gate: the output occurs if any of the inputs occurs.
Intermediate event: can be used immediately above a primary event to provide more room for the event description, and also as the output of any gate.
AND gate: the output occurs only if all of the inputs occur.

POSSIBLE RISKS ASSOCIATED WITH HAZARDS

In fractional distillation the separated fractions are further converted using processes such as ‘cracking’ or ‘catalytic reforming’. Flammable hazards are therefore likely to be presented by many substances on a typical petrochemical refining plant. According to the Encyclopedia of the Earth, even though these are closed processes, heaters and exchangers in the atmospheric and vacuum distillation units could provide a source of ignition, and the potential for a fire exists should a leak or release occur.

In order for gas to ignite there must be an ignition source, typically a spark (or flame or hot surface), and oxygen. For ignition to take place the concentration of gas or vapor in air must be at a level such that the ‘fuel’ and oxygen can react chemically. The power of the explosion depends on the ‘fuel’ and its concentration in the atmosphere. The relationship between fuel, air and ignition is illustrated in the ‘fire triangle’. Gases and vapors released from crude oil refining activities cause harmful effects on workers exposed to them, whether absorbed through the skin, inhaled or swallowed.

People exposed to harmful substances may develop illnesses such as cancer many years after the first exposure. According to Halma India News (2009), many toxic substances are dangerous to health in concentrations as little as 1 ppm (parts per million). For example, given that 10,000 ppm is equivalent to 1% of the volume of any space, it can be seen that an extremely low concentration of some toxic gases can present a hazard to health. Flammable gas hazards occur when the concentration of gases or vapors in air reaches 10,000 ppm (1%) by volume or higher.
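A small unit-conversion sketch (the 10,000 ppm = 1% equivalence is the one quoted above; the 50 ppm hydrogen sulphide figure anticipates the paragraph that follows):

```python
# ppm <-> percent-by-volume conversion, plus a simple threshold check.

def ppm_to_percent(ppm: float) -> float:
    """Convert a concentration in parts per million to percent by volume."""
    return ppm / 10_000.0

def exceeds_limit(ppm: float, limit_ppm: float) -> bool:
    """True if the measured concentration is at or above the given limit."""
    return ppm >= limit_ppm

print(ppm_to_percent(10_000))   # 1.0   -> start of the flammable-hazard range quoted above
print(ppm_to_percent(100))      # 0.01  -> typical toxic-gas detection level
print(exceeds_limit(60, 50))    # True  -> above the 50 ppm hydrogen sulphide figure
```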

Furthermore, toxic gases typically need to be detected at sub-100 ppm (0.01%) volume levels to protect personnel. Gaseous toxic substances are very dangerous because they are often invisible and odorless, and we cannot predict the physical behavior that can influence the properties of a gas leak. Hydrogen sulphide, for example, is particularly hazardous; although it has a very distinctive ‘bad egg’ odor at concentrations above 0.1 ppm, exposure to concentrations of 50 ppm or higher will lead to paralysis of the olfactory glands, rendering the sense of smell inactive.

This in turn may lead to the assumption that the danger has cleared. Prolonged exposure to concentrations above 50 ppm will result in paralysis and death. Excursions in liquid levels, pressure and temperature may also happen if automatic control devices fail. The sections of the process that might be exposed to corrosion include the preheat exchanger, preheat furnace and bottoms exchanger, atmospheric tower and vacuum furnace, vacuum tower, and overheads. Wet hydrogen sulfide will also crack steel. Nitrogen oxides can form in the flue gases of furnaces when processing high-nitrogen crudes.

ACCIDENT CONSEQUENCES

The European Agency for Safety and Health at Work (n.d.) emphasizes that employers should think about safety at the workplace because they are obliged by law to maintain a safe and healthy workplace. Besides that, most employers have a personal interest in safeguarding their employees. Employees, in turn, are also responsible for thinking about safety at work. Accidents can never be ruled out entirely, and injuries not only cause financial loss but also pain and disruption to workers and their families.

Workers should understand that working safely is part of their future livelihood: safety and health at the workplace is the responsibility of both employers and employees. The European Agency for Safety and Health at Work (n.d.) also states that a workplace accident is an injury or illness that occurs in relation to an employee's job; injuries usually result from accidents that happen while workers are performing their duties and tasks. In the United States, federal law, through the Occupational Safety and Health Administration (commonly referred to as OSHA), provides employees with the protection of a safe working environment.

OSHA has enacted a set of rules and regulations that all employers must abide by, and the law also provides compensation for workers who suffer illness or injury. David Greenberg Law (2012) explains that a workplace accident causing injury typically happens because of unsafe working conditions, defective equipment, lack of maintenance or a dangerous environment. A worker may, for example, develop carpal tunnel syndrome in jobs that require physical strain, such as heavy lifting, working with hazardous materials, or prolonged repetitive tasks such as typing.

Many factors can cause work-related psychological illness, including a stressful environment, discrimination or harassment, a motor vehicle accident occurring in the performance of job-related duties, an injury caused by unsafe conditions or equipment, or a slip and fall at work. Being injured at work has serious, and sometimes permanent and irreversible, effects on victims and their families. The pain and suffering endured is not usually covered by worker compensation plans. If an accident happens, an attorney should be consulted before any other action is taken.

A workplace accident lawyer is best placed to evaluate a particular situation and recommend the best course of action for the worker and their family. Workers bear about 30 percent of the total costs of workplace injury and illness. These costs include, but are not limited to, loss of income, pain and suffering, loss of future earnings, lost past investment, and medical expenses. The problems created by an injury may force workers to shift to other jobs, retrain for other careers, or cope with complete disability.

In addition, injured employees face financial loss as a direct result of their injuries. Injury or illness also puts a strain on relationships in a number of ways, through emotional stress, financial pressure and isolation. Family and friends are deeply affected, which may eventually lead to problems within their relationships; the injury or illness may even result in a break-up or in a temporary or permanent loss of intimacy. The attitudes and responses of employers, colleagues, supervisors and others within the workplace can also affect the worker's psychological health during physical recovery.

Workers also bear the risk of having to change their lifestyle because of the injury, with both they and their families enduring costs and monetary losses. Anyone injured in the workplace, or whose family member has been injured, should consult a law firm with workplace accident attorneys.

METHOD TO CONTROL THE RISK

After identifying the hazards in the industry, we need to find methods to control the risk and thereby lower the chances of an accident occurring. One of the methods we identified is the hierarchy of risk control.

Hierarchy of Controlling the Risk

To reduce the risk of a hazard occurring during operations, the hierarchy of control should be used. The hierarchy represents the order in which controls should be considered when selecting methods of controlling a risk, from the highest to the lowest order of control measure. The sequence of the hierarchy of control is as follows:

i. Elimination of the hazard: get rid of the hazardous equipment or substance completely, provided that doing so does not make the operation ineffective.
ii. Substitution with a lesser hazard: replace the hazardous material with a less hazardous one.
iii. Engineering controls: redesign the hazardous equipment or the work processes.
iv. Isolation of the hazard: separate the operator from the process, for example with a physical barrier, by setting a safe distance between the operator and the process, or by defining a hazardous area (e.g. providing a sealed cage area for fireworks).
v. Administrative controls: policies, procedures or behavioral controls such as working hours, how the process is conducted and who may work in a specific area, together with enforcement of the safe procedures that have been set (e.g. providing training in the work procedures and work processes).
vi. Personal protective equipment (PPE): the last order of control in the hierarchy; clothing or equipment that protects the operator from the hazard (e.g. hearing protection, gloves, masks, hats, high-visibility vests). PPE alone is not recommended, as it cannot guarantee complete protection.

Applying the hierarchy of risk control alone is not enough to reduce the risk at the workplace.

Therefore, other methods need to be implemented in order to create a truly safe workplace.

System improvement

Reduce or eliminate the possibility of a chemical release by choosing inherently safer materials and technologies. In addition, reduce the potential severity of the impacts of a chemical release through mitigation measures (containment dikes, sprinkler systems) and emergency response plans.

Maintenance

Maintenance needs to be carried out regularly to spot any deformation or cracks in the distillation equipment.

This can be done by creating a comprehensive maintenance schedule so that the equipment is always working in its best condition. It is also important to make sure that the person in charge of maintenance follows the schedule closely to avoid errors and future accidents.

SOLUTION TO MINIMIZE THE RISK

Control of pressure, temperature and liquid levels is among the important criteria that must be considered in order to minimize the risk of chemical hazards in fractional distillation processes.

If the automatic devices fail, an excursion could occur, so the control devices must be kept within their operating parameters to prevent thermal cracking in the distillation towers. To prevent unwanted crude from entering the reformer charge, relief systems should be provided for overpressure and operations should be monitored. The sections of the process most easily affected by corrosion include the preheat exchanger, preheat furnace and bottoms exchanger, the atmospheric tower and vacuum furnace, and the vacuum tower and overhead.

Where sour crudes are processed, corrosion may occur in the furnace tubing and in both the atmospheric and vacuum towers when metal temperatures exceed 450 °F. Wet hydrogen sulfide will also cause cracking in steel. Nitrogen oxides are very corrosive in the presence of water at low temperature; they are produced in the flue gases of the furnace when high-nitrogen crudes are processed. Corrosion caused by chemicals such as hydrochloric acid in the distillation system must also be controlled.

One solution for reducing this chemical hazard is to inject an alkaline solution such as ammonia into the overhead streams before the hot crude-oil vapors begin to condense. However, the use of ammonia must be accompanied by a sufficient supply of water; if not, ammonium chloride will be deposited, which could cause severe corrosion. The crude feedstock may also contain appreciable amounts of water which, when heated above its boiling point in contact with the hot oil in the unit, can cause a vaporization explosion.

CONCLUSION

This report is about potential sources of hazards in a working environment. The chosen case study relates to the oil and gas industry and is entitled 'Fractional Distillation'. Fractional distillation is used for separating a mixture of substances with narrow differences in boiling points and is one of the most important processes in the oil and gas industry. As this report shows, many hazards can arise in refinery processes, especially hazards related to chemicals, so chemical hazards were chosen as the main hazards in the case study.

A fault tree diagram is used to conduct the hazard analysis. This method is a graphical representation of the major faults or critical failures associated with the process, their causes, and potential countermeasures, and it helps to identify corrective actions that correct or mitigate problems. This type of analysis can be used to optimize the features and goals of the fractional distillation process and to identify critical factors and human error in the oil and gas refining industry. Hence, solutions to these problems can be found should an accident occur.

In conclusion, chemical hazards can arise from ignition or fire caused by leakage or cracks inside the distillation units. Should an accident occur, employers must think and react quickly to contain and control the situation, while employees need to remain alert at all times to the safety conditions at their workplace. Rules and guidance on workplace safety are provided by organizations such as the European Agency for Safety and Health at Work and the Occupational Safety and Health Administration (OSHA), and by firms such as David Greenberg Law, to increase workers' awareness of safety.

The solution taken to minimize the risk is to stop the sources of accidents, namely leakage and corrosion. By using alkaline chemicals, corrosion of the distillation unit walls can be controlled, thus reducing the number of accidents.

REFERENCES

Bastidas, L. M. (March 12, 2012). Fractional Distillation of Crude Oil. Retrieved August 11, 2012 from http://wiki.nanjingschool.com/users/laurencemclellanbastidas/weblog/87653/Fractional_Distillation_of_crude_oil.html

Fault Tree Analysis. Retrieved August 12, 2012 from http://www.hf.faa.gov/workbenchtools/default.spx?rPage=Tooldetails&subCatId=43&toolID=120

Freudenrich, C. Howstuffworks: How Oil Refining Works. Retrieved August 10, 2012 from http://science.howstuffworks.com/environmental/energy/oil-refining4.htm

Gas Hazards in the Petrochemical Industry. (July 2009). Retrieved August 11, 2012 from http://halmapr.com/news/india/2009/07/03/gas-hazards-in-the-petrochemical-industry/

Health and Safety Aspects of Petroleum Refining. (October 31, 2008). Retrieved August 12, 2012 from http://www.eoearth.org/article/Health_and_safety_aspects_of_petroleum_refining#gen8

Kinyanjui, L. The Advantages and Disadvantages of Fractional Distillation. Retrieved August 11, 2012 from http://www.ehow.com/info_8513450_advantages-disadvantages-fractional-distillation.html

Making Crude Oil Useful. Retrieved August 10, 2012 from http://tfscientist.hubpages.com/hub/making-crude-oil-useful-fractional-distillation-and-cracking

Pearsonlongman.com. Retrieved August 10, 2012 from http://www.pearsonlongman.com/technicalenglish/pdf/level2/level2_unit8.pdf

Ritcher, L. Fault Tree Analysis Template in Excel. Retrieved August 12, 2012


Measuring the Resistivity of Copper Wire of Different Lengths

In this report, I will be writing about an experiment I will conduct on copper wire of different lengths. The dependent variable I will be measuring is the resistance of the copper wire. To do this experiment, one needs to obtain measurements with a high degree of accuracy, taking care with the equipment used and measuring each value to a consistent degree of accuracy for all results. The difficulty in measuring the resistivity of copper wire comes from the properties of copper as a material.

Copper naturally has a very low resistance because it is an excellent electrical conductor. Because of this, it is important to use a copper wire specimen that is long enough and thin enough to have an appreciable resistance. The accepted value for the resistivity of copper is about 10⁻⁸ Ω m, so a 1 m length of copper wire with a cross-sectional area of 1 mm² (10⁻⁶ m²) can be predicted to have a resistance of about 0.01 Ω. This can be calculated using the resistance formula:

R = ρL/A ≈ (10⁻⁸ Ω m × 1 m) / (10⁻⁶ m²) = 10⁻² Ω
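As a quick check of this order-of-magnitude prediction, the short Python sketch below (my own illustration, not part of the original report) evaluates R = ρL/A using the published resistivity of copper quoted later in this report.

```python
# Sketch: predicting the resistance of a copper wire from R = rho * L / A.
rho = 1.72e-8      # resistivity of copper in ohm metres (published value)
length = 1.0       # wire length in metres
area = 1.0e-6      # cross-sectional area in square metres (1 mm^2)

resistance = rho * length / area
print(f"Predicted resistance: {resistance:.4f} ohm")  # about 0.017 ohm, i.e. of order 0.01 ohm
```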

The wire I will use is going to be thinner than this and will vary in length from 0.2 to 1.0 metres, each specimen 0.2 m longer than the previous one. In total, I will have 5 different lengths.

Apparatus:

  • Voltmeter - accuracy stated as (±0.5% of reading + 1 digit) in the user manual
  • Ammeter - accuracy stated as (±1.2% of reading + 1 digit) in the user manual
  • Battery Supply of 6V
  • Copper Wire
  • 1m Ruler in cm
  • Scissors
  • Electrical Wires
  • Crocodile clips
  • Micrometer

Method: The following procedure describes how I intend to obtain my results:

1. I will measure out the different lengths of copper wire I intend to use using a millimeter ruler to gain the most accurate results I can.

2. Once the lengths are cut, the diameter of the copper wire I am using must be measured. To gain the most accurate result, I will use a micrometer and measure the diameter in several places on the wire and take an average value from these readings to work out the average cross-sectional area.

3. I will connect the first length of wire into an electrical circuit, making sure that current can flow through the entire length of the copper wire connected. The circuit will look like this diagram:

[Circuit diagram: the copper wire connected to the battery supply, with the ammeter in series and the voltmeter connected across the wire.]

4. The voltage across the wire and the current running through it will be recorded.

5. To find the resistance of the wire I will use the formula V=IR.

6. The resistivity can then be worked out using the formula ρ = RA/L, where R is the resistance calculated, A is the cross-sectional area of the copper wire and L is the length of the copper wire. The measurements will be recorded in a table, shown in the Results section below. A short calculation sketch based on these steps follows.
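To make steps 2 to 6 concrete, here is a minimal Python sketch of the calculations described above (my own illustration). It uses the measured diameter of 0.435 mm and the first 0.2 m reading from the Results section as example inputs.

```python
import math

# Step 2: cross-sectional area from the measured diameter
# (the report averaged several micrometer readings to obtain 0.435 mm).
diameter_m = 0.435e-3                       # metres
area = math.pi * (diameter_m / 2) ** 2      # about 1.49e-7 m^2

# Steps 4-5: resistance from the measured voltage and current
# (first 0.2 m reading in the Results section).
voltage = 0.044                             # volts across the wire
current = 1.911                             # amps through the wire
resistance = voltage / current              # R = V / I, about 0.023 ohm

# Step 6: resistivity from rho = R * A / L.
length = 0.2                                # metres of wire
resistivity = resistance * area / length    # about 1.7e-8 ohm metres

print(f"Area        = {area:.3e} m^2")
print(f"Resistance  = {resistance:.4f} ohm")
print(f"Resistivity = {resistivity:.2e} ohm metre")
```

The same calculation is repeated for every length and repeat in the results tables.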

Resistivity of Wire

The physical properties of a wire can be categorized as either intrinsic properties or extrinsic properties. The difference between the two categories is that intrinsic properties do not depend on the amount of material present, whereas extrinsic properties do.

In this investigation of the resistivity of copper wire, the voltage, resistance and current are all extrinsic quantities, since they depend on how much wire is in the circuit. The intrinsic property of the copper wire is its resistivity, which depends only on the material itself, copper. The resistivity of a material can be defined as the resistance of a specimen 1 m long with a 1 m² cross-sectional area. Because resistivity depends mainly on the properties of the material itself, each material, whether copper or pure silicon, has its own resistivity coefficient; for copper it is 1.72 × 10⁻⁸ Ω m. This value may seem very small, but copper is an excellent electrical conductor, and for the conductance to be very high the resistivity must be very low. This can also be expressed through the fact that resistivity is the inverse of conductivity (σ = 1/ρ). The potential difference across the copper wire (measured in volts) and the flow of charge (the current) through it are related through the resistance of the wire, not its resistivity.

In order to find the resistivity, one first needs to work out the resistance using the equation R = V/I, and then use the formula ρ = RA/L to find the resistivity, where A is the cross-sectional area of the wire used in the experiment. The resistance of the wire is expected to double when the length of the wire doubles. The resistivity, however, should stay very nearly the same throughout all of the repeats conducted.

Reducing the uncertainty in the results

Several factors could affect the accuracy of my results in this experiment. One of them is the measuring devices I use. Any measuring device can only measure to a certain degree of accuracy, and it is this that determines how close the results are to the true value. In my experiment, I am using a 3½-digit multifunction multimeter (DMM) to measure the current through the circuit and the potential difference (p.d.) across the copper wire.

The main advantage of a DMM over an analog voltmeter is that it records values to a set number of decimal places, with different ranges corresponding to different levels of resolution. In this experiment, I will be measuring the p.d. to a resolution of 0.001 V using the 2 V range on the multimeter. This resolution ensures that I get a voltage reading to 3 decimal places, improving the precision of the reading and allowing me to obtain a value closer to the true value.

The accuracy of the ammeter is published as ±1.2% of the reading + 1 LSD (least significant digit) for the 200 mA range and 0.1 mA resolution I will be using for the current. This means that the value I record may differ from the true current by up to 1.2% of the reading plus 0.1 mA. I am using the 200 mA range rather than the 20 A range because its resolution is finer, which gives a more precise result and reduces the uncertainty in my results. Similarly, the 2 V range I will use on the voltmeter has an accuracy of ±0.5% of the reading + 1 LSD, which is better still.
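To show how a specification of this form translates into an absolute uncertainty, the sketch below applies the "% of reading + 1 LSD" rule to example readings. The function name and the example reading values are my own illustrative choices based on the quoted ranges and resolutions.

```python
# Sketch: turning a "±x% of reading + 1 LSD" accuracy spec into an absolute uncertainty.

def reading_uncertainty(reading: float, percent_of_reading: float, resolution: float) -> float:
    """Return (percent_of_reading % of the reading) plus one least significant digit."""
    return abs(reading) * percent_of_reading / 100.0 + resolution

# Example: a 0.206 V reading on the 2 V range (0.001 V resolution, ±0.5% of reading + 1 LSD).
print(f"Voltmeter: ±{reading_uncertainty(0.206, 0.5, 0.001):.4f} V")   # about ±0.002 V

# Example: a 0.190 A reading on the 200 mA range (0.1 mA resolution, ±1.2% of reading + 1 LSD).
print(f"Ammeter:   ±{reading_uncertainty(0.190, 1.2, 0.0001):.4f} A")  # about ±0.0024 A
```

This instrument contribution is separate from the spread-based uncertainties calculated from repeated readings later in the report.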

Another factor that can affect the calculated resistivity is the temperature of the copper wire. A change in temperature changes the resistance, so the resistance is no longer simply proportional to the length of the wire. Normally the resistance of the wire increases as its length increases, because there are more atoms in the wire for the electrons to pass on their way through its entire length. Since the increase in resistance is proportional to the increase in length (R ∝ L), the resistance should double when the length of the copper wire is doubled.

To make sure the resistance is not affected by temperature, I will connect the copper wire into the circuit at a low voltage so that the wire does not warm up and increase in resistance as the atoms inside it vibrate more. I will also be using a micrometer to measure the diameter of the wire, rather than a standard ruler, because its uncertainty is far smaller: the micrometer allows me to record the diameter with an uncertainty of ±0.0005 mm, whereas an ordinary ruler with mm markings would give an uncertainty of about ±0.1 mm.

Results:

These are the results I collected from the experiment. All of the data is raw data that I collected myself and has not been manipulated in any way. N.B. The diameter of the wire was measured to be 0.435 mm, and the cross-sectional area was calculated as 1.486 × 10⁻⁷ m². This value was used throughout the experiment to work out the different resistivity values using the resistivity equation stated previously.

Repeat  Length of Wire (m)  Voltage (V)  Current (A)  Resistance (Ω)  Resistivity (Ω m)
1       0.2                 0.044        1.911        0.023           1.71E-08
2       0.2                 0.042        1.907        0.022           1.64E-08
3       0.2                 0.043        1.909        0.023           1.67E-08
1       0.4                 0.088        1.882        0.047           1.74E-08
2       0.4                 0.085        1.879        0.045           1.68E-08
3       0.4                 0.087        1.869        0.047           1.73E-08
1       0.6                 0.132        1.839        0.072           1.78E-08
2       0.6                 0.135        1.845        0.073           1.81E-08
3       0.6                 0.129        1.839        0.070           1.74E-08
1       0.8                 0.158        1.748        0.090           1.68E-08
2       0.8                 0.163        1.741        0.094           1.74E-08
3       0.8                 0.159        1.745        0.091           1.69E-08
1       1.0                 0.207        1.739        0.119           1.77E-08
2       1.0                 0.209        1.738        0.120           1.79E-08
3       1.0                 0.201        1.710        0.118           1.75E-08

From the table above, I also worked out the averages of the results measured from the experiment.

Repeat  Length of Wire (m)  Voltage (V)  Average V  Current (A)  Average I  Resistance (Ω)  Average R  Resistivity (Ω m)
1       0.2                 0.044        0.043      1.911        1.909      0.023           0.023      1.71E-08
2       0.2                 0.042                   1.907                   0.022                      1.64E-08
3       0.2                 0.043                   1.909                   0.023                      1.67E-08
1       0.4                 0.088        0.087      1.882        1.877      0.047           0.046      1.74E-08
2       0.4                 0.085                   1.879                   0.045                      1.68E-08
3       0.4                 0.087                   1.869                   0.047                      1.73E-08
1       0.6                 0.132        0.132      1.839        1.841      0.072           0.072      1.78E-08
2       0.6                 0.135                   1.845                   0.073                      1.81E-08
3       0.6                 0.129                   1.839                   0.070                      1.74E-08
1       0.8                 0.158        0.160      1.748        1.745      0.090           0.092      1.68E-08
2       0.8                 0.163                   1.741                   0.094                      1.74E-08
3       0.8                 0.159                   1.745                   0.091                      1.69E-08
1       1.0                 0.207        0.206      1.739        1.729      0.119           0.119      1.77E-08
2       1.0                 0.209                   1.738                   0.120                      1.79E-08
3       1.0                 0.201                   1.710                   0.118                      1.75E-08

Uncertainties within my results: Before creating the graphs of my results, I calculated the overall uncertainty of each measurement within this experiment, so that I could see where most of the uncertainty in the average resistivity value comes from.

To calculate the uncertainty for each measurement, I took, for each quantity, the average value whose readings differed most from it and used that largest difference as the uncertainty. The percentage uncertainties of each measurement were as follows:

  • Percentage uncertainty in the voltage: V = 0.206 ± 0.005 V, so uncertainty in V = 0.005/0.206 × 100% ≈ ±2.43%
  • Percentage uncertainty in the current: I = 1.729 ± 0.019 A, so uncertainty in I = 0.019/1.729 × 100% ≈ ±1.10%
  • Percentage uncertainty in the resistance: since R = V/I, uncertainty in R = 1.10% + 2.43% ≈ ±3.53%
  • Percentage uncertainty in the length: L = 0.6 ± 0.001 m, so uncertainty in L = 0.001/0.6 × 100% ≈ ±0.17%

Percentage uncertainty in the area: the diameter of the wire is 0.435 ± 0.0005 mm.

Best estimate of the area, with diameter 0.435 mm: A = π × (0.2175 mm)² ≈ 0.1486 mm² ≈ 1.486 × 10⁻⁷ m²

Maximum area, with diameter 0.4355 mm: A = π × (0.21775 mm)² ≈ 0.1489 mm² ≈ 1.489 × 10⁻⁷ m²

Minimum area, with diameter 0.4345 mm: A = π × (0.21725 mm)² ≈ 0.1482 mm² ≈ 1.482 × 10⁻⁷ m²

So the area is 0.1486 ± 0.0004 mm², with a percentage uncertainty of 0.0004/0.1486 × 100% ≈ ±0.27%

  • So the percentage uncertainty in the resistivity can be calculated as the sum of all the percentage uncertainties in the experiment: since ρ = RA/L, uncertainty in ρ = 3.53% + 0.27% + 0.17% ≈ ±3.97%
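The propagation above (adding percentage uncertainties for quantities that are multiplied or divided) can be sketched in a few lines of Python, using the figures quoted in this section; the helper function name is my own.

```python
# Sketch: propagating percentage uncertainties through rho = R * A / L, where R = V / I.
# For quantities that are multiplied or divided, percentage uncertainties add.

def percent_uncertainty(value: float, absolute_uncertainty: float) -> float:
    return absolute_uncertainty / value * 100.0

u_V = percent_uncertainty(0.206, 0.005)      # voltage:    about 2.43 %
u_I = percent_uncertainty(1.729, 0.019)      # current:    about 1.10 %
u_R = u_V + u_I                              # resistance R = V / I: about 3.53 %
u_L = percent_uncertainty(0.6, 0.001)        # length:     about 0.17 %
u_A = percent_uncertainty(0.1486, 0.0004)    # area:       about 0.27 %

u_rho = u_R + u_A + u_L                      # resistivity rho = R * A / L: about 3.97 %
print(f"Percentage uncertainty in resistivity ≈ ±{u_rho:.2f} %")
```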

The percentages of instrument error are as follows:

  • Voltmeter reading is ±0.0005 V: instrumental error in voltmeter = 0.0005/0.206 × 100% ≈ 0.24%
  • Ammeter reading is ±0.0005 A: instrumental error in ammeter = 0.0005/1.729 × 100% ≈ 0.03%
  • Micrometer reading is ±0.0005 mm: instrumental error in micrometer = 0.0005/0.435 × 100% ≈ 0.11%
  • The total instrumental error is the sum of the individual instrumental errors above, which is about 0.38%.

[Graph 1: average resistance plotted against length of wire, with a line of best fit. Graph 2: the same plot with vertical error bars showing the percentage uncertainty in each average resistance.]

Data Analysis: Across all of the results I collected, there is a strong relationship between the length of the wire and its resistance.

One would expect this strong correlation between resistance and length, since one of the simple laws of electrical resistance is that it increases in proportion to the length of the wire. This can be explained by considering the electrons in a circuit and the atoms arranged within its components. In my experiment, a current passed through the circuit once a voltage was applied. When the electrons were given the energy to move, they passed through the circuit into the copper wire, where they experienced the resistance that was calculated.

As the length of the copper wire increases, the number of fixed atoms within the structure of the wire increases. Because of this, the electrons have a higher chance of colliding with the fixed atoms, which causes the wire to heat up and the resistance to increase. The strength of the correlation between the average resistance and the length of the copper wire can be seen from the line of best fit in Graph 1, which has R² = 0.9984, showing an extremely strong positive correlation between the two variables.

From the gradient displayed in Graph 1, an average resistivity can be calculated that takes into account all of the points in the data collected. The gradient of the line represents resistance per unit length, R/L. To calculate the resistivity, one needs not only the value of R/L but also the cross-sectional area of the wire. Multiplying the gradient by the cross-sectional area gives the average resistivity:

ρ = (R/L) × A = 0.1192 Ω/m × 1.486 × 10⁻⁷ m² ≈ 1.77 × 10⁻⁸ Ω m
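As an illustration of this approach, the sketch below fits a least-squares line to the average resistance values from the averages table and multiplies the gradient by the measured cross-sectional area. The explicit least-squares formula is my own choice; the report simply reads the gradient off the spreadsheet's line of best fit.

```python
# Sketch: average resistivity from the gradient of resistance against length.
# Lengths and average resistances are taken from the averages table above.
lengths = [0.2, 0.4, 0.6, 0.8, 1.0]                    # metres
avg_resistance = [0.023, 0.046, 0.072, 0.092, 0.119]   # ohms

n = len(lengths)
mean_L = sum(lengths) / n
mean_R = sum(avg_resistance) / n

# Ordinary least-squares gradient of R against L (ohms per metre).
gradient = sum((L - mean_L) * (R - mean_R) for L, R in zip(lengths, avg_resistance)) \
           / sum((L - mean_L) ** 2 for L in lengths)

area = 1.486e-7                                 # cross-sectional area in m^2
resistivity = gradient * area                   # rho = (R/L) * A

print(f"Gradient ≈ {gradient:.4f} ohm/m")       # roughly 0.12 ohm/m
print(f"Average resistivity ≈ {resistivity:.2e} ohm metre")
```

The gradient and resistivity this produces agree with the values quoted above to within rounding.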

In Graph 2, the percentage uncertainty of each average resistance is displayed as vertical error bars.

The percentage uncertainty in the length of the wire was so small that it was not worth adding to the graph, since it would be extremely hard to see. From the percentage uncertainties of the average resistances, one can calculate maximum and minimum values for the resistivity by looking at gradients, as was done for Graph 1. To calculate the minimum gradient, I took the gradient of the line from the top of the error bar on the lowest resistance to the bottom of the error bar on the highest resistance.

I did this to obtain the shallowest gradient possible from all the points on the graph, and then multiplied this gradient by the smallest area value:

lowest ρ = 0.1144 Ω/m × 1.482 × 10⁻⁷ m² ≈ 1.70 × 10⁻⁸ Ω m

For the maximum value of the resistivity, I took the gradient of the line from the bottom of the error bar on the lowest resistance to the top of the error bar on the highest resistance. This gives the steepest gradient possible from all of the points on the graph, which I then multiplied by the maximum area.

maximum ρ = 0.1263 Ω/m × 1.489 × 10⁻⁷ m² ≈ 1.88 × 10⁻⁸ Ω m
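A minimal sketch of this bounding procedure, using the averaged data and the ±3.53% resistance uncertainty quoted earlier, is shown below. The exact endpoints the report read off its graph may differ slightly, so the numbers here are only indicative.

```python
# Sketch: shallowest and steepest gradients through the error bars of the
# first and last points, then the corresponding resistivity bounds.
lengths = [0.2, 0.4, 0.6, 0.8, 1.0]                    # metres
avg_resistance = [0.023, 0.046, 0.072, 0.092, 0.119]   # ohms
frac_unc = 0.0353                                      # ±3.53 % uncertainty in each resistance

L_lo, L_hi = lengths[0], lengths[-1]
R_lo, R_hi = avg_resistance[0], avg_resistance[-1]

# Shallowest line: top of the first error bar to the bottom of the last one.
min_gradient = (R_hi * (1 - frac_unc) - R_lo * (1 + frac_unc)) / (L_hi - L_lo)
# Steepest line: bottom of the first error bar to the top of the last one.
max_gradient = (R_hi * (1 + frac_unc) - R_lo * (1 - frac_unc)) / (L_hi - L_lo)

area_min, area_max = 1.482e-7, 1.489e-7                # m^2, from the diameter uncertainty
print(f"Lowest resistivity  ≈ {min_gradient * area_min:.2e} ohm metre")
print(f"Highest resistivity ≈ {max_gradient * area_max:.2e} ohm metre")
```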

After looking at the average, minimum and maximum values of the resistivity, and taking into account all of the uncertainties within the calculation, one could say that from the investigation conducted the resistivity of copper wire is 1.76 × 10⁻⁸ ± 1.2 × 10⁻⁹ Ω m. The percentage uncertainty of the resistivity would then be 1.2 × 10⁻⁹ / 1.76 × 10⁻⁸ × 100% ≈ 6.8%.

Biggest Source of Uncertainty

From looking at the percentage uncertainties for all my measurements, the resistance produced the most uncertainty. The uncertainty in the resistance was worked out by adding up the uncertainties of the measured voltage and current.

It is from these two measurements that the uncertainty in the resistance arises. Having calculated the instrumental errors of the multimeters used as the voltmeter and ammeter, I would not conclude that the majority of the error came from the accuracy of the apparatus. Rather, the resistance uncertainty was calculated from the average voltage and average current whose readings differed most from their averages: the values I chose were 0.206 ± 0.005 V and 1.729 ± 0.019 A, as these had the biggest uncertainties. Because of this, the uncertainty I produced for the resistance was the maximum possible one.

Anomalies and Systematic Errors

I did not have any anomalous results when looking at the average resistance graph; all of the points plotted showed a strong correlation with increasing length. Systematic errors may, however, have contributed to some of my resistivity values being higher or lower than the overall average.

An example of this could have occurred when measuring the diameter of the copper wire. The micrometer gave no indication of whether its jaws were only just touching both sides of the wire or were tightened too far, squashing the wire and giving a lower diameter at certain points, which matters because I took 3 readings across the wire and averaged them. If this happened, the calculated cross-sectional area, and therefore some of my resistivity values, may have been slightly too low.

Another source of error may have been the battery pack. A temporary dip in its output would mean less current flowing through the circuit, giving an apparently larger resistance than a previous recording with the same length of wire, and this would also alter the final value of the resistivity. A further uncertainty, which counts as human error, could have been the position at which I placed the crocodile clips at either end of the copper wire.

For the same length of wire, the crocodile clip may have been placed further from the end of the copper wire than in the previous measurement, meaning that the effective length of the wire would have decreased marginally, which may have resulted in a lower resistance reading. Also, when I measured the length of the copper wire, I had to straighten it out since it was coiled, and in doing so I may have accidentally stretched it, increasing its length by a small amount.

This may have made the measured resistance larger than it should have been, since the electrons had to travel a longer distance.

Evaluation

After looking at all of my results, I believe that the method I used and the ways of reducing the uncertainty in my experiment were effective. The instrumental errors were minimal, and the overall uncertainty of my final resistivity value was low. The resistivity value itself varied slightly but stayed largely constant throughout the experiment.

As I have said, I do not believe this variation was caused by the accuracy of the multimeters I used, but rather by other factors such as changes in the environment, like temperature, or systematic errors to do with the battery pack. To decrease the uncertainty in the measured resistance, I could use a finer resolution on my voltmeter (0.1 mV) and ammeter (0.001 mA) to reduce the effect of the least significant digit (LSD) and give a more accurate result.

This way I could increase the precision of my results and record a value closer to the true value. Comparing my average resistivity with the published value of 1.72 × 10⁻⁸ Ω m, my average is very close to it, which shows a good level of accuracy throughout my experiment, considering the more precise tools used by professionals to obtain the published value. The repeats I did helped me to record a resistivity close to the published value by reducing the random uncertainty in my results.

To gain even more accuracy I could do more repeats, or I could reduce the interval between lengths to 0.1 m to increase my range of data; that way I would reduce the random error within my data even further. I could also test different diameters of wire, or a different material, and compare those results with these to see how they differ. One other change I could make next time is to use alternating current (AC) rather than direct current (DC), since AC is what is used in household wiring, so it would provide further information about how well copper performs in that application.
