What Is Balance in Art and Why Does It Matter

Balance is one of the most crucial principles in art. Balance and symmetry are interconnected, but they are not exactly the same thing.

Take a look at their definitions:

  • Symmetry — The visual quality of repeating parts of an image along an axis, along a path, or around a center.
  • Asymmetry — Everything that is not symmetrical.
  • Balance — A visual principle whereby the elements of a design are distributed so that the composition feels equally weighted throughout.

Artists basically depend on balance when they manage visual weight in their works. Balance concerns the visual weight of your composition, which affects how much each element draws the attention of your audience. The key elements used to achieve balance are line, shape and color. There are two kinds of balance, namely symmetrical and asymmetrical balance.

  • Symmetrical balance – Symmetrical balance occurs when your composition has the same visual weight on each side of an axis. Imagine perfect mirror images facing each other across a central axis. This type of balance conveys grace and simplicity; it is pleasing to look at, but also very predictable.
  • Asymmetric balance – A composition with unequal weight on the two sides has asymmetric balance. This approach, visually more interesting than symmetry, typically places a large focal point on one side and several less significant focal points on the other.

Symmetrical balance often goes with the name formal balance whereas asymmetrical balance is also known as informal balance.

Vitruvian Man by Leonardo Da Vinci

Any work of art must be balanced. Moreover, even the most expressive and dynamic composition is still subject to the law of equilibrium, because only in this way will it be complete. But what does this mean?

When an artist depicts any objects, he distributes them on the plane of a sheet or canvas. Some of them are larger, some are smaller. Some are located on the left, some on the right … This very distribution of objects in the sheet can create a feeling of movement, displacement, balance, etc.

To know which kind of balance an artist has used, one needs an imaginary line at the center of the artwork. Formal balance depicts similar things on both sides of this imaginary line. In informal balance, on the other hand, the two sides of the imaginary line contain different things, and one side seems a little heavier than the other. Informal balance is more psychological in nature, in that it is usually felt rather than noticed.

Most of Da Vinci’s works contain symmetrical balance. Da Vinci’s Proportion of the Human Figure is highly symmetrical in nature. This particular work depicts how the human body can generate different shapes such as the circle and the square. In this work Da Vinci created balance through careful proportion. The appearance of symmetrical balance in the Proportion of the Human Figure can be attributed to the approximate symmetry present in the work.

Compared with Leonardo Da Vinci, Deborah Butterfield uses more asymmetrical balance in her works than symmetrical. An example of asymmetrical balance in her work can be found in Verde. By drawing an imaginary line at the center we can see that one side is heavier than the other and that the two sides depict different things, unlike Da Vinci’s Proportion of the Human Figure. In this particular work Butterfield used steel to achieve her desired end.

The head of the horse seems almost weightless, appearing as little more than a protruding stick, compared with the body, which is densely built up with steel and therefore appears heavier. This contrast makes the work a strong example of asymmetrical balance in art.

When confronting symmetry with asymmetry, it must be remembered that the visual mass of a symmetric figure will be greater than the mass of an asymmetric figure of a similar size and shape; symmetry creates balance in itself and is generally considered beautiful and harmonious. But there is a flip side to the coin – it is often devoid of dynamics and may seem static and boring; asymmetry, as the antipode of static symmetry, usually brings dynamics to the composition.

Composition is an integral part of a work of art. Compositional rules, techniques and means are based on the rich creative experience of artists of many generations, but the compositional technique does not stand still, it is constantly evolving, enriched by the creative practice of new masters.


Steady State Theory and Pulsating Theory

In cosmology, the Steady State theory (also known as the Infinite Universe theory or continuous creation) is a model developed in 1948 by Fred Hoyle, Thomas Gold, Hermann Bondi and others as an alternative to the Big Bang theory (usually known as the standard cosmological model). In the steady state view, new matter is continuously created as the universe expands, so that the perfect cosmological principle is adhered to. Theoretical calculations showed that a static universe was impossible under general relativity, and observations by Edwin Hubble had shown that the universe was expanding. The steady state theory asserts that although the universe is expanding, it nevertheless does not change its appearance over time (the perfect cosmological principle); it has no beginning and no end. The theory requires that new matter must be continuously created (mostly as hydrogen) to keep the average density of matter constant over time.

The amount required is low and not directly detectable: roughly one solar mass of baryons per cubic megaparsec per year or roughly one hydrogen atom per cubic meter per billion years, with roughly five times as much dark matter. Such a creation rate, however, would cause observable effects on cosmological scales.

Dust-Cloud Theory

Between 1940 and 1955 the German astronomer Carl F. von Weizsäcker, the Dutch-American astronomer Gerard P. Kuiper and the U.S. chemist Harold C. Urey worked out a theory that attempted to account for all the characteristics of the solar system that need to be explained. According to their dust-cloud theory, the solar system was formed from a slowly rotating cloud of dust and gas that contracted and started to rotate faster in its outer parts, where eddies formed. These eddies were small near the center of the cloud and larger at greater distances from the center. The distances corresponded more or less to the Titius-Bode relation. As the cloud cooled, materials coagulated near the edges of the eddies and eventually formed planets and asteroids, all moving in the same direction. The slowly rotating central part of the cloud condensed and formed the sun, and the sun’s central temperature rose as gravity further compressed the material.

When nuclear reactions eventually began in the sun’s interior, about 5 billion years ago, much of the nearby gas was blown away by the pressure of the sun’s emitted light. Nevertheless, the earth retained an atmosphere consisting of methane, ammonia, carbon monoxide, water vapor, and nitrogen, with perhaps some hydrogen. In this primitive atmosphere and in the seas below it, organic compounds were formed that eventually resulted in living organisms. The organisms evolved over the next 2 billion years into higher plants and animals, and photosynthesis by plants and the weathering of rock produced the oxygen in the earth’s atmosphere. Although free gases near the sun were blown outward 4 to 5 billion years ago, according to the dust-cloud theory, the giant planets were too distant to be much affected. They are large, therefore, and contain a great amount of hydrogen. The comets, in turn, are thought to be the outer part of the primordial nebula, left behind as the inner part condensed to form the sun and the planets.

The Dutch astronomer J. H. Oort speculated that this material condensed into chunks that continue to move along with the sun through space. Now and then a chunk is perturbed and falls slowly toward the sun. As it is heated by sunlight, it grows a coma and tail. The dust-cloud theory thus explains the solar system characteristics listed above. It is weakest in detailing the process whereby the planets and asteroids formed from solids that made up only a small percentage of the primordial nebula.

However, this is essentially a chemical problem, strongly dependent on the sequence or timing of events such as eddy formation, temperature changes, and the start of solar luminosity.

Pulsating Theory

According to this theory, the universe is supposed to be expanding and contracting alternately, i.e. pulsating. At present, the universe is expanding. According to the pulsating theory, it is possible that at a certain time the expansion of the universe will be stopped by the gravitational pull and the universe will contract again. After it has contracted to a certain size, an explosion occurs again and the universe starts expanding.

This alternating expansion and contraction of the universe gives rise to the name pulsating universe.


Detection of Backlash Phenomena in the Induction Motor


The problem our thesis work will solve is to reduce backlash in induction-motor-driven machines. Backlash is a mechanical form of dead band that can lead to errors in hole location if the motion required to machine the holes causes a reversal in axis direction; it also leads to loss of motion between input and output shafts, making it difficult to achieve accurate centring in equipment such as machine tools. The main cause is vibration from the motor as a result of high torque ripple in the motor.

An induction motor is a kind of AC machine in which alternating current is supplied to the stator directly and to the rotor by induction from the stator. Induction motors may be single-phase or polyphase (Toufouti et al., 2013).
In construction, the motor has a stator, the stationary portion, consisting of a frame that houses the magnetically active annular cylindrical structure called the stator lamination stack. The stack is punched from electrical steel sheet, with three-phase winding sets embedded in evenly spaced internal slots.

The rotor, the rotating part of the motor, is made up of a shaft and a cylindrical structure called the rotor lamination stack. The stack is punched from electrical steel sheet, with evenly spaced slots located around the periphery to accept the conductors of the rotor winding (Ndubisi, 2006).

The rotor can be of the wound type or the squirrel-cage type. In a polyphase motor, the three phase windings are displaced from each other by 120 electrical degrees in space around the air-gap circumference. When excited from a balanced polyphase source, these stator windings produce a magnetic field in the air-gap rotating at synchronous speed, as determined by the number of stator poles and the applied stator frequency (Bimal, 2011).

In the control of electrical motors, before the introduction of micro-controllers and high-switching-frequency semiconductor devices, variable speed actuators were dominated by DC motors.

Today, using modern high-switching-frequency power converters controlled by micro-controllers, the frequency, phase and magnitude of the input to an AC motor can be changed, and hence the motor’s speed and torque can be controlled. AC motors combined with their drives have replaced DC motors in industrial applications because they are cheaper, more reliable, lighter and require less maintenance. Squirrel cage induction motors are more widely used than all other electric motors as they have all the advantages of AC motors and they are easy to build.

The main advantage is that these motors do not require an electrical connection between the stationary and rotating portions of the motor. Therefore, they do not need mechanical commutators, which makes them practically maintenance-free motors. The motors also have lower weight and inertia, high efficiency and high overload capability. Therefore, they are cheaper and more robust, and less prone to failure at high speeds.

Furthermore, the motor can be used to work in explosive environments because no sparks are produced.
Taking into account all the advantages outlined above, induction motors must be considered the perfect electrical-to-mechanical energy converter. However, mechanical energy is more often than not required at variable speeds, where the speed control system is not a trivial matter.

The effective way of producing an infinitely variable motor speed drive is to supply the motor with three-phase voltage of variable frequency and amplitude.
A variable frequency is required because the rotor speed depends on the speed of the rotating magnetic field provided by the stator. A variable voltage is required because the motor impedance reduces at low frequencies and the current has to be limited by reducing the supply voltage (Schauder, 2013).
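
To illustrate this idea, the sketch below computes a constant volts-per-hertz (V/f) command for a drive. The rated values and the low-speed voltage boost are assumptions chosen for the example only; they are not figures taken from this thesis.

    # Illustrative constant V/f (volts-per-hertz) command generation.
    # Rated values below are assumed for the example only.

    RATED_VOLTAGE_V = 400.0      # assumed rated stator voltage
    RATED_FREQUENCY_HZ = 50.0    # assumed rated supply frequency
    BOOST_VOLTAGE_V = 20.0       # small boost to overcome stator resistance at low speed

    def vf_command(target_frequency_hz: float) -> tuple[float, float]:
        """Return a (voltage, frequency) pair keeping V/f roughly constant below rated speed."""
        f = max(0.0, min(target_frequency_hz, RATED_FREQUENCY_HZ))
        v = RATED_VOLTAGE_V * f / RATED_FREQUENCY_HZ
        return max(v, BOOST_VOLTAGE_V), f

    if __name__ == "__main__":
        for f_cmd in (5.0, 25.0, 50.0):
            v, f = vf_command(f_cmd)
            print(f"f = {f:4.1f} Hz -> V = {v:5.1f} V")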

Before the days of power electronics, limited speed control of these motors was achieved by switching the three stator windings from delta connection to star connection, allowing the voltage at the motor windings to be reduced. Induction motors are also available with more than three stator windings to allow a change in the number of pole pairs.

However, a motor with several windings is very costly because more than three connections to the motor are needed and only certain discrete speeds are available. Another method of speed control can be realized by means of a wound rotor induction motor, where the rotor winding ends are brought out to slip rings (Malik, 2013). However, this method removes the main advantage of induction motors, and it also introduces additional losses; by connecting resistors or reactances in series with the windings, only poor performance is achieved.

With the enormous advances in converter technology and the development of complex and robust control algorithms, considerable research effort has been devoted to developing optimal techniques of speed control for these machines. Motor control has traditionally been achieved using field oriented control (FOC).

The method involves transforming the stator currents into a reference frame aligned with one of the machine flux vectors. The torque-producing and flux-producing components of the stator current are thereby decoupled, so that one component of the stator current controls the rotor flux magnitude and the other controls the output torque (Kazmier and Giuseppe, 2013).

The implementation of this scheme, however, is complicated. FOC is also well known to be highly sensitive to parameter variations, and it relies on accurate parameter identification to obtain the required performance.
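
To make the decoupling idea concrete, the sketch below shows the standard Clarke and Park transforms that resolve measured phase currents into a flux-producing d component and a torque-producing q component. The flux angle is assumed to come from an estimator or encoder; this is only a minimal illustration of the transformation step, not the control scheme developed in this work.

    import math

    def clarke(i_a: float, i_b: float, i_c: float) -> tuple[float, float]:
        """Amplitude-invariant Clarke transform: three phase currents -> alpha/beta components."""
        i_alpha = (2.0 / 3.0) * (i_a - 0.5 * i_b - 0.5 * i_c)
        i_beta = (1.0 / math.sqrt(3.0)) * (i_b - i_c)
        return i_alpha, i_beta

    def park(i_alpha: float, i_beta: float, theta_flux: float) -> tuple[float, float]:
        """Park transform: rotate alpha/beta into the flux-oriented d-q frame."""
        i_d = i_alpha * math.cos(theta_flux) + i_beta * math.sin(theta_flux)   # flux-producing
        i_q = -i_alpha * math.sin(theta_flux) + i_beta * math.cos(theta_flux)  # torque-producing
        return i_d, i_q

    # Balanced three-phase currents aligned with the flux angle give i_d = amplitude, i_q = 0.
    I, theta = 10.0, math.radians(30.0)
    i_a = I * math.cos(theta)
    i_b = I * math.cos(theta - 2.0 * math.pi / 3.0)
    i_c = I * math.cos(theta + 2.0 * math.pi / 3.0)
    print(park(*clarke(i_a, i_b, i_c), theta))   # approximately (10.0, 0.0)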

Another motor control technique is sensorless vector control, which is applied over both the high and low speed ranges. In this method, the stator terminal voltages and currents are used to estimate the rotor angular speed, the slip angular speed and the rotor flux. However, around zero speed the slip angular velocity estimation becomes very difficult.
Motivation for the work

When we were on machine training in our office, we were given a drawing to produce a machine shaft. During the process, when we fed in a cut of 10 mm to the machine, it would cut 9.5 mm, and when we wanted to drill a hole at the center of the job, it would drill it off center. We called our supervisor after we had wasted much time, power and material.

Surprisingly, after his inspection, he told us that backlash in the machine was responsible and instructed us to use another machine, which we did, and we got what we needed immediately. That experience motivated us to research how to reduce high torque ripple in the induction motor, which is the main cause of the vibration that leads to backlash in industrial machines.

Statement of the problem

  • The problem our research work will solve is to reduce backlash in industrial machines.

Explanation of the problem

Backlash

Backlash can be defined as the maximum distance or angle through which any part of a mechanical system may be moved in one direction without applying appreciable force or motion to the next part in the mechanical sequence; it is a mechanical form of dead band. More specifically, it is any non-movement that occurs during axis reversals.

For instance, suppose the x-axis is commanded to move one inch in the positive direction and, immediately after this movement, is commanded to move one inch in the negative direction. If any backlash exists in the x-axis, the axis will not immediately start moving in the negative direction, and the resulting travel will not be precisely one inch.

So, it can cause positioning errors in hole location if the motion required to drill the holes causes a reversal in axis direction. It also causes loss of motion between reducer input and output shafts, making it difficult to achieve accurate positioning in equipment such as machine tools. A minimal model of this behaviour is sketched below.
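
The following minimal sketch models backlash as a symmetric dead band between an input (lead screw) position and the output it drives. The gap value and the command sequence are invented purely to illustrate the lost motion on a reversal; this is not the model used in the thesis.

    def simulate_backlash(commands, gap):
        """Dead-band backlash model: the output only moves while the input pushes
        against one face of the gap; after a reversal the input must cross the
        whole gap before the output responds."""
        outputs = []
        out = commands[0]
        for x in commands:
            if x > out + gap / 2.0:      # pushing on the forward face
                out = x - gap / 2.0
            elif x < out - gap / 2.0:    # pushing on the reverse face
                out = x + gap / 2.0
            # otherwise the input is inside the gap and the output does not move
            outputs.append(out)
        return outputs

    # A 10 mm forward feed followed by a return to zero with 0.5 mm of total backlash:
    # the reverse stroke loses 0.5 mm of motion, so the output does not return to zero.
    print(simulate_backlash([0.0, 10.0, 0.0], gap=0.5))   # [0.0, 9.75, 0.25]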

Electrically, the main cause of this problem is vibration from the electric motor as a result of high torque ripple in the induction motor.

Benefits of solving the problem

  1. High quality products will be produced.
  2. Productivity will increase because the adjustment and readjustment of the machine feeding handle or feeding screw to eliminate backlash is reduced.
  3. Operational cost will be reduced.
  4. Greater efficiency will be guaranteed.
  5. Greater accuracy and precision of products will be guaranteed.
  6. Wastage of materials will be greatly reduced.

Research objectives

  1. To develop a model that will control the error to achieve stability, using DTC and fuzzy logic with duty ratio.
  2. To determine the error in the torque of the machine that causes vibration, which leads to backlash and results in the production of sub-standard products.
  3. To determine the position of the stator flux linkage space vector in the poles of the induction motor.
  4. To determine the stator flux linkage error in the induction motor, which also causes vibration.
  5. To simulate the model above in the Simulink environment and validate the result.

 Scope and limitation of the work

This project work is limited to the use of a fuzzy logic controller with duty ratio to replace the torque and stator flux hysteresis controllers in the conventional DTC technique. The controller has three inputs: the stator flux error, the electromagnetic torque error and the position of the stator flux linkage vector. The inference method used is the Mamdani fuzzy inference system, and the defuzzification method adopted in this work is the maximum criterion method.
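
As a rough illustration of Mamdani-style inference with maximum-criterion defuzzification, the sketch below maps per-unit torque and flux errors to a duty ratio. The membership functions and the two-rule base are invented for the example, and the flux-sector input is omitted; this is not the rule base designed in this work.

    def tri(x, a, b, c):
        """Triangular membership function with vertices a <= b <= c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def duty_ratio(torque_error, flux_error):
        """Mamdani-style inference returning a duty ratio in [0, 1], defuzzified by
        the maximum criterion (pick the candidate output with the highest
        aggregated membership)."""
        # Fuzzification (errors assumed to be normalised to [0, 1]).
        te_small = tri(torque_error, -0.5, 0.0, 0.5)
        te_large = tri(torque_error, 0.3, 1.0, 1.7)
        fe_small = tri(flux_error, -0.5, 0.0, 0.5)
        fe_large = tri(flux_error, 0.3, 1.0, 1.7)

        # Rule evaluation: min as AND, max as OR.
        w_low = min(te_small, fe_small)    # IF torque error small AND flux error small THEN duty LOW
        w_high = max(te_large, fe_large)   # IF torque error large OR flux error large THEN duty HIGH

        # Aggregate the clipped output sets and pick the duty with maximum membership.
        candidates = [d / 20.0 for d in range(21)]
        def aggregated(d):
            return max(min(w_low, tri(d, -0.5, 0.0, 0.6)),
                       min(w_high, tri(d, 0.4, 1.0, 1.5)))
        return max(candidates, key=aggregated)

    print(duty_ratio(torque_error=0.8, flux_error=0.1))   # a high duty ratio, about 0.85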

Significance of the work

The importance of this work in industries where induction motor drives are the main application cannot be overemphasized.
As noted earlier, because of their ruggedness, simple mechanical structure and easy maintenance, most electrical drives in industry are based on induction motors.

Also, a wide range of induction motor applications require variable speed; therefore, if the induction motor speed is not accurately estimated, the efficiency of the overall industrial process will be affected. Equally, harmonic losses, if not kept in check, will shorten the life span and efficiency of the motor inverter.

Based on the above, this work aims at reducing the principal causes of inefficiency in the DTC induction motor drive and improving the performance of the system.

Organization of the work

The work is organized into five chapters. Various control techniques are discussed in chapter two. Chapter three discusses the methodology, design and implementation of direct torque control of the induction motor using the fuzzy logic with duty ratio controller.
Chapter four discusses data collection, analysis and the simulation results, comparing the conventional method of control with the proposed fuzzy logic with duty ratio method of control under applied load torque conditions.


Study of government backed initiatives to promote female participation in Physics and Mathematics

Introduction

This essay aims to explore the UK-based initiatives designed to promote female participation within the Science, Technology, Engineering and Mathematics (STEM) disciplines, focusing predominantly on physics. The essay will consider the different teaching techniques and styles that have been researched and implemented in order to appeal specifically to a female audience, and their relative success in terms of encouraging females to pursue both higher education in STEM-based disciplines and careers.

It is noticeable within numerous records and statistics that women in STEM-based subjects are under-represented, which has led to an absence of females actively employed within STEM careers. In 2008, women made up only 12.3 per cent of the STEM workforce. This is, however, an increase of 2.0 percentage points since 2003 (Kirkup et al., 2010), showing that there has been some successful work towards encouraging females towards STEM careers. This under-representation is nowhere more apparent than within the science discipline of physics, which displays the persistent problem of a lack of girls continuing to study physics beyond the age of 16 (physics is a compulsory part of the GCSE curriculum). It has been recognised that a significant number of girls actually outperform boys at Key Stage 4 within science, but this is not transferred into a desire to study physics at Key Stage 5 (post-16). In 2005, only 14% of girls who were awarded an A* or A for GCSE Double Award Science or physics progressed to A-level physics (Hollins et al., 2006). The Institute of Physics has released figures indicating an incremental yearly increase in the number of A-level physics candidates between 2006 and 2008, but there has been little change in the proportion of girls taking the subject post-16. In 2008, only 22% of the entries for A-level physics were female (Institute of Physics, 2008). These statistics can be seen clearly in the appendix, where the number of female entries in 2008 actually illustrates a decrease in female uptake of 0.3 percentage points in comparison to 2007. In addition, recruitment to biology has remained relatively stable, with more females than males being entered for A-level examinations. Chemistry entries for males and females are relatively equal, and mathematics still sees a top-heavy male count, although less dramatically than physics.

There has been an extensive amount of research into the potential reasons behind the consistently low numbers of females within physics. The development of institutionalised education in England was based on principles of class and gender differentiation (Purvis, 1981), and many scholars attribute the gender culture existing today to these historical roots, where it was the norm for middle-class girls to undertake roles as wives and mothers of society’s privileged gentlemen. Consequently, physics, with its high mathematical content and often abstract ideas, was thought a subject suitable only for males, with girls focusing on the more subjective areas of science such as the moral aspects, including religion and how science could be used to improve domestic life. Many still believe connotations of this attitude exist today, and it is important to recognise that although ‘educational policy may change, what students, their parents and their teachers have come to understand as appropriate ways for girls and boys to be, to know and to behave, will continue to reflect the historical roots of the culture’ (Murphy, P. and Whitelegg, E., 2006). In addition, research by Alison Kelly (1987) identifies three factors that appear to account for a lack of interest by women in science, namely that women see it as likely to be difficult, masculine and impersonal. A number of modern-day initiatives and specific teaching techniques have been developed to address these misconceptions and will be explored, with their relative success critiqued, in the remaining body of the essay.

Many initiatives to encourage female participation in science try to address the causes of the phenomenon known in academia as the ‘leaky pipeline’. The phrase was devised to illustrate what the statistics clearly show: much like a leaky pipeline, women steadily drop out of the science educational system, which carries students from secondary education through to higher education and then on to a job in STEM.

Figure 1 illustrates the risks that may be experienced by women already in the science pipeline upon commencement of a STEM based career.

Figure 1: An example of The Leaky Pipeline

Source: International federation of university women [image online] Available at:< http://www.ifuw.org/imgs/blog/blog_leaky_pipeline.jpg> [Accessed 16 April 2011].

Pell (1996) acknowledges that much of the selection between men and women has taken place even before academia is entered, arguing that critical phases in the selection towards an academic career include early childhood, adolescence, the school years and the job entry period. Pell gives the development of self-esteem early in the life-course, student-teacher interaction in classrooms leading to lower aspirations amongst girls, fewer female role models, and conflicts with family responsibilities as some of the reasons for the ‘leak’ in the pipeline. Blickenstaff (2005) argues alternatively that ‘no one in a position of power along the pipeline has consciously decided to filter women out of the STEM stream, but the cumulative effect of many separate but related factors results in the sex imbalance in STEM that is observed today’. Many believe the ‘leakage’ from the pipeline requires a multi-faceted solution, and time is needed to allow modernisations in teaching and learning to take effect; only then will this be evident within the statistics often used to prove such initiatives have failed. It can be questioned whether the merit of such initiatives can so quickly be analysed and concluded as failure if they have not had sufficient time to evolve. For example, the number of girls choosing to study physics may only increase once teaching practices, the academic relevance of the syllabus and functional support networks are truly aligned and sustainable. This issue has been further addressed by Cronin and Roger (1999), who debate the focus of various initiatives aiming to bring women and science together. They conclude that many of these initiatives are flawed as they tend to focus on one of three areas: attracting women to science, supporting women already in science, or changing science to be more inclusive of women, and hence the other areas are ignored. Phipps (2008) reasons that the ‘important initiatives designed to address the problem are under-researched allowing little opportunity for educational practitioners, activists, policy-makers and scholars to analyse and learn from the practices and policies that were developed over the past decade’.

Outside of the classroom, many initiatives and organizations have been set up to encourage, support and engage women within STEM careers. One of the most well-known and long-running initiatives, Women In Science and Engineering (WISE), was founded in 1984. The aim of WISE, as it is more commonly known, is to encourage the understanding of science among young girls and women and achieve an overall impact capable of promoting STEM-based careers as both attainable and stimulating for women. WISE delivers a range of different options and initiatives in order to achieve its strategy and openly works with other organisations, where appropriate, in a bid to accomplish this. It provides many resources for girls, teachers and parents, and these resources and much more can be found on its website. It has been noted that there is inadequate work appraising the impact of WISE policies since the organization began. Phipps (2008) suggests that ‘although school visits by WISE did have a positive effect on girls’ opinions of science this was not translated into long term change in their career ambitions’. Alternatively, WISE claims that an increase in female engineering graduates, from 7% in 1984 to 15% today, can be attributed to the success of the campaign, believing that the WISE programme’s accomplishments can only be measured using the proportions of engineering students and engineers who are female (WISE, 2010). To date, however, there has been no onward tracking of participants from the WISE outlook programme. This leads others to be more critical, with Henwood (1996) claiming WISE have ‘inadvertently limited the ways in which girls and women could discuss the challenges they faced’, and with no detailed research evaluating whether the various actions and policies of WISE have produced the impact, it can be hard to attribute the growth to WISE without questioning whether it was a result of other elements present at the time. Phipps (2008) echoes this uncertainty, stating ‘it is difficult to definitely conclude that WISE policies have been the decisive or contributory factor in encouraging female participation in scientific careers’.

The UK government is committed to remedying the current situation, assisting with the launch, in 2004, of the UK Resource Centre (UKRC) for Women in SET (science, engineering and technology). This organisation aims to provide practical support and help in order to encourage more women to take up a career in STEM (UKRC, 2007; Wynarczyk, 2006, 2007a). It must be noted that the UKRC is principally concentrated on the participation of women in STEM careers and its responsibility does not include education. The UKRC is prominent in collecting evaluative data to allow the programme’s attainments to be monitored; this includes recording the numbers of women with whom it has engaged in its work, in addition to statistics on the outcomes for returners in its programmes (UKRC, 2010).

Many have criticised the large number of non-governmental organisations and initiatives involved in the STEM sector, stating that the process is disjointed and ungainly, with the consequence that some policies and initiatives may be unable to reach their full potential. The STEM Cross-Cutting Programme also concluded that ‘at the current time there are far too many schemes, each of which has its own overheads’ (DfES, 2006a: p.3). Despite this, the Government has markedly increased its STEM education budget and the activities it supports, in an attempt to reverse the current STEM trends. This includes cash incentives to encourage more physics-trained teachers (Jha, A., Guardian online, 2005, ‘New incentives for maths and physics teachers’ [Available online]).

Within the current UK educational system, educators have been promoting programmes like Girls Into Science and Technology (GIST) and Computer Clubs for Girls (CC4G) for many years in an attempt to get more girls into science. The latter is an organisation led by employers and not run for profit; it is licensed by the government, with the Department for Children, Schools and Families (DCSF) currently funding it. Furthermore, the UK Government is providing support for schools to encourage more girls to study physics and to help them become more confident and assertive in the subject. Methodologies for teaching physics with an emphasis on physics as a ‘socially relevant and applied subject’ have led to higher attainment for both males and females, conclude Murphy and Whitelegg (2006). Previous research has also indicated that girls are motivated to study physics when they can see it as part of a ‘pathway to desirable careers’ (Murphy and Whitelegg, 2006). Successful approaches to making physics more relevant to girls, as presented in the government-commissioned ‘Girls into Physics – Action Research’, included:

Source: Daly, A. et al. (2009) Girls into Physics – Action Research, Research brief, page 2. [Available online] <http://www.education.gov.uk/publications/eOrderingDownload/DCSF-RB103.pdf>

However, several challenges are related to these approaches. Some students, especially those of a younger age group, struggle to articulate their career aspirations, and there may also be a knowledge deficiency on the part of teachers about possible career options suitable for students who take physics courses. This could add pressure on teachers, as they feel the need to research these elements and bring them into their lesson planning and schemes of work (SoW). The time constraints many teachers experience with regard to sufficient planning and marking time are already well documented. It could be suggested that, with the low number of trained physics teachers available within the educational system at this time and their high demand (Institute of Physics, Physics and: teacher numbers, 2010), additional content beyond that of the curriculum could put viable trainees off this career and potentially push them into other subject areas where there is less additional material to deal with. Availability of school resources could also be a problem.

The ‘Girls into Physics – Action Research’ commissioned by the Institute of Physics and undertaken by Daly, A., et al. (2009) aims to address five key assumptions that girls have about physics, identified in prior research by Murphy, P. and Whitelegg, E. (2006). This essential practice (figure 2) is deemed to support female participation within physics, and it is hoped that it will be adopted as part of classroom management.

Figure 2: Essential practice that supports girls’ participation in physics

Source: Daly, A., et al. (2009) Girls into Physics – Action Research, Figure 2, page 6.

[Available online]

< http://www.education.gov.uk/publications/eOrderingDownload/DCSF-RR103.pdf>

The research, also carried out on behalf of the Department for Education (DfES), recommends numerous ‘top tips’ for successful teaching and learning, with these suggestions available to view in the appendix. These tips have been identified by teachers who have shown some success in engaging female students.

Alternatively, B. Ponchaud (2008) conducted a review within schools where the female uptake of physics was already particularly high. Ponchaud identified several top tips for teachers to use to engage female students.


Table 2: B. Ponchaud’s top tips to engage female students in physics

Source: Ponchaud, B, The Girls into Physics project. School Science Review, March 2008, 89(328)

Antonia Rowlinson from St Anthony’s RC girls’ school implemented the ‘top tips’ without the need to alter the curriculum. Physics was contextualised or illustrated in the areas of interest revealed by Ponchaud’s investigation. For example, within the forces module, questions on friction were set in the context of the then current Strictly Come Dancing television programme. The follow-up survey showed that ‘whilst this new teaching technique had not substantially shifted the students’ perceptions about physics there were improvements. More girls saw physics as relevant to their career aspirations’ (Ponchaud 2008).

In conclusion, the evidence clearly shows that the under-representation of females is a cause for concern. Girls perceive themselves to be less capable than boys in science and less interested in it, and these attitudes can be attributed to historical views of women that are proving hard to dismiss. Many believe that science educationalists have an obligation to alter those factors under their control. One would hope that, in time, individual actions by teachers will help girls to break down the challenges experienced within the STEM pipeline and result in equal participation, benefiting society. Teachers should pay attention to the way they address and present physics, watching out for language and terminology, which can have a vast psychological effect on females who may suffer from stereotype threat, where females believe they are not as capable as their male counterparts. I have also explored the idea that girls respond to physics when it is taught in an accessible and socially relevant way, but countered this with the argument of teaching time constraints and available school resources.

Research that examines the overall impact of initiatives and policies aimed at promoting the cause of women in science has provided a mixture of opinions and outcomes that are open to critique. It seems apparent that although these initiatives specifically target the thoroughly researched reasons why females may disengage from physics and science as a whole, they cannot systematically prove that the apparent incremental growth in participation figures is down to the programmes and measures they have put in place. Only recently have initiatives such as the UKRC begun to collect evaluative data on the number of women affected by their work. Some have presumed a positive impact for various policies, stating an increase in the proportions of women choosing certain courses as confirmation of different policies’ success (e.g. WISE, 2010). I have explored critique of this view, including Phipps (2008), who recognises ‘the limited successes and impact of initiatives in general, but tempers this with statements acknowledging the wide range of challenges facing these initiatives’. I believe that when more organisations begin to record and monitor engagement rates as a direct result of exposure to a particular initiative, successful programmes will become more apparent. However, I also realise that many of these organisations have limited funding and capabilities, preventing them from doing this, as they focus budgets on areas addressing their core strategy. Until this is addressed with additional funding, I fear the exact effects of many of these initiatives will never be known, and it will remain a subject for academic discussion.

References

Blickenstaff, J. C. (2005) Women and science careers: leaky pipeline or gender filter? Gender and Education, Vol. 17, No. 4, October 2005, pp. 369–386.

Cronin, C. & Roger, A. (1999) Theorizing progress: women in science, engineering, and technology in higher education, Journal of Research in Science Teaching, 36(6), 639–661.

Computer Club for Girls. Accessed on 16/04/2011 < http://www.cc4g.net/>

Daly, A., Grant, L. and Bultitude, K. (2009) Girls into Physics – Action Research: Research brief. [Available online]

Daly, A., Grant, L. and Bultitude, K. (2009) Girls into Physics – Action Research. [Available online]

<http://www.education.gov.uk/publications/eOrderingDownload/DCSF-RR103.pdf>

DfES, (2006a), ‘The Science, Technology, Engineering and Mathematics (STEM) Programme Report’, HMSO, ISBN: 978-184478-827-9

Henwood, F. (1996) ‘WISE Choices? Understanding occupational decision-making in a climate of equal opportunities for women in science and technology’, Gender and Education, 8 (2), 119–214.

Hollins, M., Murphy, P., Ponchaud, B. and Whitelegg, E. (2006) Girls in the Physics Classroom: A Teachers’ Guide for Action. London, Institute of Physics

Institute of Physics (2010) Physics and: teacher numbers, An Institute of Physics briefing note:

Institute of Physics (2008) Year on year increase of physics A-level entrants. Available from:

Kelly, A. (1987) Science for Girls? Philadelphia, PA: Open University Press.

Kirkup, G., Zalevski, A., Maruyama, T. and Batool, I. (2010). Women and men in science, engineering and technology: the UK statistics guide 2010. Bradford: the UKRC.

Murphy, P. and Whitelegg, E. (2006) Girls in the Physics Classroom: A Review of the Research on the Participation of Girls in Physics. London, Institute of Physics

Murphy, P. and Whitelegg, E. (2006) ‘Girls and physics: continuing barriers to “belonging”’, Curriculum Journal, 17(3), 281–305.

Pell, A. N. (1996) Fixing the leaky pipeline: women scientists in academia. Journal of Animal Science, 74(11).

Phipps, A. (2008). Women in Science, Engineering, and Technology: three decades of UK initiatives. Stoke on Trent: Trentham Books

Ponchaud, B, The Girls into Physics project. School Science Review, March 2008, 89(328)

Purvis, J. (1981) The double burden of class and gender in the schooling of working-class girls in nineteenth-century England 1800–1870, in: L. Barton & S. Walker (Eds) Schools, teachers and teaching (Barcombe, Falmer Press).

Women in Science and Engineering (WISE). Accessed on 16/04/2011

Women in Science and Engineering Research Project. A publication by The Scottish Government. Accessed on 16/04/2011.

Wynarczyk, P. (2006), “An International Investigation into Gender Inequality in Science, Technology, Engineering and Mathematics (STEM)”, Guest Editor, Journal of Equal Opportunities International, Special Issue, Volume 25, issue 8, December.

Wynarczyk, P., (2007a), ‘Addressing the “Gender Gap” in the Managerial Labour Market: The Case of Scientific Small and Medium-sized Enterprises (SMEs) in the North East of England’, Management Research News: Communication of Emergent International Management Research, v.30:11, 12

Wynarczyk, P. and Hale (2009) Take up of Science and Technology Subjects in Schools and Colleges: A Synthesis Review. Commissioned by the Economic and Social Research Council (ESRC) and the Department for Children, Schools and Families (DCSF).


What are the factors that Influence Slope Stability?


CHAPTER 1: INTRODUCTION

1.1 Overview

A slope is an inclined ground surface, which may be either natural or man-made. Each slope has its own soil characteristics and geometric features that determine its ability to resist gravity without collapsing. When slope failure occurs, the soil mass moves downward and outward, slowly or suddenly and often without warning. Slides usually begin from hairline tension cracks, which propagate through the soil layers (Das, 1994). Slope failures have caused an unquantified number of casualties and economic losses. In rural and less populated areas, however, mass movements have less effect, being only part of the natural degradation of the land surface. In the case of coastal cliffs, instability involving the destruction of property is often accepted because the cost of resisting the natural erosion process with cliff stabilization measures is prohibitive.

Factors that Influence Slope Stability

Figure 1, (Tulane University, Prof. Stephen A. Nelson, 6 oct 2010)

Gravity is the main force behind mass wasting. It acts everywhere on the earth’s surface, pulling everything towards the centre of the earth. As shown in figure 1, on a flat surface gravity acts straight downwards, so the material will not move under the force of gravity. The case is different when the material is placed on a slope: the force of gravity can then be resolved into two components, acting perpendicular and tangential to the slope.

Figure 2 below shows how the two components are resolved:

Figure 2, (Tulane University, Prof. Stephen A. Nelson, 6 oct 2010)

The perpendicular component of gravity, gp, acts to hold the material in place on the slope. Meanwhile, the tangential component of gravity, gt, causes a shear stress parallel to the slope that pulls the material in the down-slope direction, as shown in figure 2.
As the slope angle increases, the shear stress, or tangential component of gravity, gt, also increases, while the perpendicular component of gravity, gp, decreases.
The forces resisting movement down the slope are classified as shear strength, which includes the cohesion and the frictional resistance among the particles that make up the object.
The material will move down-slope when the shear stress is larger than the total forces holding the object on the slope. For loose materials such as soil, clay, sand and silt, once the shear stress becomes higher than the cohesive forces holding the particles together, the particles separate and flow down the slope.

Consequently, down-slope movement is favoured by steeper slope angles, which increase the shear stress, and by anything that decreases the shear strength, for example by lowering the cohesion among the particles or the frictional resistance. The balance between the two is expressed by the safety factor, Fs, the ratio of shear strength to shear stress.

Fs = Shear Strength/Shear Stress

When the value of the safety factor is less than 1.0, slope failure is expected.
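
As a simple numerical illustration of this ratio, the sketch below evaluates the factor of safety of a dry, planar (infinite) slope, taking the Mohr-Coulomb shear strength on a plane parallel to the ground surface. The soil values are assumed for the example only and are unrelated to the cases analysed later in this report.

    import math

    def infinite_slope_fos(c_kpa, phi_deg, unit_weight_kn_m3, depth_m, slope_deg):
        """Fs = shear strength / shear stress on a plane parallel to a dry, planar slope."""
        beta = math.radians(slope_deg)
        phi = math.radians(phi_deg)
        normal_stress = unit_weight_kn_m3 * depth_m * math.cos(beta) ** 2           # kPa
        shear_stress = unit_weight_kn_m3 * depth_m * math.sin(beta) * math.cos(beta)
        shear_strength = c_kpa + normal_stress * math.tan(phi)                      # Mohr-Coulomb
        return shear_strength / shear_stress

    # Assumed example: c' = 5 kPa, phi' = 30 deg, 18 kN/m3, failure plane 3 m deep, 25 deg slope.
    print(round(infinite_slope_fos(5.0, 30.0, 18.0, 3.0, 25.0), 2))   # about 1.48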

1.2 Aims

To perform the different methods of slope stability analysis used by the two programs.
To design suitable soil parameters.
To obtain a factor of safety equal to one (FOS = 1.0).
To compare the outcomes produced by the two programs.
CHAPTER 2: LITERATURE REVIEW

2.1 Introduction

Due to the consequences of slope failure, the topic has received extensive treatment in the literature. Several models and analytical techniques have been developed to describe a variety of geometric and soil characteristics. The majority of the literature focuses on deterministic evaluation of slope stability; however, with today’s technology the factor of safety of a slope can be determined or predicted simply by entering the soil parameters into a program.

For this project, the programs used to analyse slope stability employ different methods to solve for the factor of safety of the problems being analysed. In LimitState:GEO 2.0, the Discontinuity Layout Optimization (DLO) method is used to approach the problems, while GeoStudio 2007 uses the General Limit Equilibrium method. The factor of safety equations are then presented to highlight the importance of modelling assumptions.

This chapter presents a review of slope stability analysis methods, including determining the factor of safety for the soil strength and designing the soil parameters. The variability within soil parameters is summarized in this review. Finally, several case studies of slope stability analysis are summarized.

Slope stability can nowadays be analysed using a computer program; LimitState:GEO 2.0 and GeoStudio 2007 are examples of programs that can be used. Drained and undrained analyses are required to find the factor of safety of the soil strength at which the slope collapses for the given soil parameters. The soil parameters have to be adjusted until the factor of safety obtained is equal to one (F.O.S = 1).

2.2 Background study

For this project, two computer programs were used for modelling the slope stability problems: LimitState:GEO 2.0 and GeoStudio 2007. Using these tools, modelling and calculating the factor of safety and the adequacy factor of the soil is much easier.

Both programs serve the same aim of producing the outcome of a slope stability analysis, but the terms used for the results differ. LimitState:GEO 2.0 outputs an adequacy factor, while GeoStudio 2007 gives a factor of safety as its result.

The adequacy factor is the factor by which the specified loads or material self-weight must be multiplied to cause collapse (LimitState:GEO 2.0, 2010).

Factor of safety is defined as the ratio of the available shear resistance (capacity) to that required for equilibrium (Geostudio, 2007).

In GeoStudio 2007 (SLOPE/W), the minimum factor of safety for moment and force equilibrium can be obtained using different methods such as the Bishop simplified method, the Ordinary method, the Janbu method and the Morgenstern-Price (M-P) method. In this work, only the minimum factor of safety from the Morgenstern-Price method is considered. Meanwhile, in LimitState:GEO 2.0 the outcome is obtained using the Discontinuity Layout Optimization (DLO) method.

2.2.1 How does Discontinuity Layout Optimization work?

In LimitState:GEO 2.0, Discontinuity Layout Optimization is the solution engine used to analyse slope stability problems. The procedure was developed at the University of Sheffield. Besides that, the method can be used to identify critical translational sliding-block failure mechanisms without restricting their form in advance.

Discontinuity layout optimization is a limit analysis method that effectively allows free choice of slip-line orientation, and the critical solution identified may involve the failing soil mass being divided into a large number of sliding blocks. Accuracy can be assessed by determining the influence of nodal refinement. DLO also readily handles variation of soil parameters, and heterogeneous bodies of soil.

The DLO procedure involves a number of steps, as shown in the diagram below.

The initial step of the procedure is to identify potential failure patterns through a set of potential discontinuities connecting nodes distributed across the problem domain. DLO can be formulated either in terms of equilibrium relations or in terms of displacements. The aim of the mathematical optimization problem in the ‘kinematic’ formulation (i.e. DLO formulated in terms of displacements) is to minimize the internal energy dissipated along discontinuities, subject to nodal compatibility constraints. This can be solved via efficient linear programming techniques and, when combined with an algorithm originally developed for truss layout optimization problems, modern computing power can be used to search directly through very large numbers of different failure mechanism topologies.

2.2.2 How does General Limit Equilibrium work?

The formulation used in general limit equilibrium was developed by Fredlund at the University of Saskatchewan in the 1970s (Fredlund and Krahn 1977; Fredlund et al. 1981). This technique encompasses the key elements of all the other methods available for slope stability analysis in GeoStudio 2007.

Several limit equilibrium methods have been developed for slope stability analysis. They are:

i) Fellenius (1936), who introduced the first method, also known as the Ordinary or Swedish method, used for a circular slip surface.

ii) Bishop (1955), who advanced the first method by introducing a new relationship for the base normal force; the equation for the FOS thereby became non-linear.

iii) Janbu (1954a), who developed a simplified method for non-circular failure surfaces, dividing a potential sliding mass into several vertical slices. The generalized procedure of slices (GPS) was developed later as a further development of the simplified method (Janbu, 1973).

iv) Morgenstern-Price (1965),

v) Spencer (1967),

vi) Sarma (1973).

These and several others made further assumptions and modifications regarding the interslice forces. For example, a general limit equilibrium (GLE) procedure was developed by Chugh (1986) as an extension of the Spencer and Morgenstern-Price methods, satisfying both moment and force equilibrium conditions (Krahn 2004; Abramson et al. 2002).

In the general limit equilibrium method, the formulation is based on two factor of safety equations. The equations are:

i) Factor of safety with respect to moment equilibrium (Fm).

Fm = Σ[c’βR + (N − uβ)R tanφ’] / [ΣWx − ΣNf ± ΣDd]

ii) Factor of safety with respect to horizontal force equilibrium (Ff).

Ff = Σ[c’β cosα + (N − uβ) tanφ’ cosα] / [ΣN sinα − ΣD cosω]

The terms in the equations are:

c’ = effective cohesion

φ’ = effective angle of friction

u = pore-water pressure

N = slice base normal force

W = slice weight

D = concentrated point load

β, R, x, f, d, ω = geometric parameters

α = inclination of slice base

Both moment and force equilibrium are satisfied by finding the cross-over point of the Fm and Ff curves.
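
A minimal sketch of how the two equations above could be evaluated for a set of slices is given below. Point loads are omitted and the base normal force N of each slice is treated as known, whereas in SLOPE/W it is found iteratively together with the interslice forces; the two-slice example at the end uses invented numbers purely to show the bookkeeping.

    import math

    def gle_factors_of_safety(slices, radius):
        """Evaluate the moment (Fm) and force (Ff) factor of safety sums for a list of
        slices, each a dict with c (kPa), phi (deg), u (kPa), N (kN), W (kN),
        beta (base length, m), alpha (base inclination, deg), x and f (moment arms, m)."""
        num_m = den_m = num_f = den_f = 0.0
        for s in slices:
            phi = math.radians(s["phi"])
            alpha = math.radians(s["alpha"])
            shear_resist = s["c"] * s["beta"] + (s["N"] - s["u"] * s["beta"]) * math.tan(phi)
            num_m += shear_resist * radius
            den_m += s["W"] * s["x"] - s["N"] * s["f"]
            num_f += shear_resist * math.cos(alpha)
            den_f += s["N"] * math.sin(alpha)
        return num_m / den_m, num_f / den_f

    slices = [
        {"c": 5.0, "phi": 30.0, "u": 0.0, "N": 120.0, "W": 130.0,
         "beta": 2.0, "alpha": 20.0, "x": 4.0, "f": 0.5},
        {"c": 5.0, "phi": 30.0, "u": 0.0, "N": 90.0, "W": 95.0,
         "beta": 2.0, "alpha": 35.0, "x": 1.5, "f": 0.2},
    ]
    print(gle_factors_of_safety(slices, radius=12.0))   # (Fm, Ff) for the invented data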

2.2.2.1 Morgenstern-Price Method

For the analysis, the Morgenstern-Price method was used to determine the factor of safety of the slope because it allows various user-specified inter-slice force functions. The inter-slice functions available in SLOPE/W for use with the Morgenstern-Price (M-P) method are (1) constant, (2) half-sine, (3) clipped-sine, (4) trapezoidal and (5) data-point specified. Selecting the constant function makes the M-P method identical to the Spencer method (GeoStudio, 2007).

Morgenstern and Price proposed an equation to handle the interslice shear forces:

X = E λ f(x)

The terms are:

f(x) = a function,

λ = the percentage (in decimal form) of the function used,

E = the interslice normal force, and

X = the interslice shear force.

In SLOPE/W the value of lambda is varied from -1.25 to +1.25, but sometimes the range needs to be narrowed because it is not possible to reach a converged solution at the boundaries of the range. The general limit equilibrium formulation consequently helps in understanding the differences between the various methods and what is happening behind the scenes.

Figure XX shows how the moment and force factors of safety vary with lambda. The M-P factor of safety occurs where the two curves cross.

Once the X value is obtained, it must match the inter-slice shear value (E2) on the free body diagram, as shown in figure …. As with the Spencer method, the force polygon closure is very good with the M-P method, since both shear and normal inter-slice forces are included (GeoStudio, 2007).
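
The sketch below shows how the cross-over of the two curves can be located by linear interpolation between sampled (lambda, Fm, Ff) points; the sampled values are hypothetical and serve only to illustrate the idea.

    def crossover_lambda(lambdas, fm_values, ff_values):
        """Locate the lambda at which the moment and force factor of safety curves cross,
        by linear interpolation between sampled points."""
        for i in range(1, len(lambdas)):
            d0 = fm_values[i - 1] - ff_values[i - 1]
            d1 = fm_values[i] - ff_values[i]
            if d0 == 0.0:
                return lambdas[i - 1], fm_values[i - 1]
            if d0 * d1 < 0.0:  # sign change: the curves cross inside this interval
                t = d0 / (d0 - d1)
                lam = lambdas[i - 1] + t * (lambdas[i] - lambdas[i - 1])
                fos = fm_values[i - 1] + t * (fm_values[i] - fm_values[i - 1])
                return lam, fos
        return None  # no crossing within the sampled range

    # Hypothetical sampled curves for illustration only:
    print(crossover_lambda([0.0, 0.2, 0.4], [1.20, 1.18, 1.16], [1.05, 1.15, 1.25]))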

CHAPTER 3: RESULTS

3.1 Introduction

This chapter presents the results obtained from the analyses performed using both programs. Several cases were analysed to compare the results produced by the two programs. In the first case, the model had to be adjusted until the factor of safety was equal to one; the parameters for the drained and undrained analyses are stated in the problem. The remaining cases show the different values of the factor of safety obtained for the same case using the two programs.

3.2 Case 1: one layer of soil, changing parameters to obtain F.O.S = 1

Material strength properties:

Table 1: soil material strength properties

Results for drained analysis

Table 3: Minimum factor of safety for undrained analysis

The results obtained from the data shown above do not give a factor of safety equal to one. Hence, the tables below show the changes of parameters for both the drained and undrained analyses needed to obtain a factor of safety equal or close to one.

Drained analysis:

Outcomes produced by the two programs as phi is changed while the remaining parameters are held constant.

Outcomes produced by the two programs as the slope angle is increased while the rest of the parameters stay the same.

Undrained analysis:

The table below shows the results obtained for the undrained analysis, where the undrained shear strength was changed to obtain a factor of safety equal or close to one:

3.3 Case 2: two different layers of soil with different parameters.

Figure 5

Material strength properties:

Results:

CHAPTER 4: DISCUSSION

4.1 Case 1

In case 1, with the material properties given in table……, the minimum factor of safety obtained for the drained analysis using LimitState:GEO 2.0 is 2.225 compared with 1.064 from GeoStudio 2007. Meanwhile, for the undrained analysis the results obtained were only slightly different, differing by 0.013. The main point of case 1 was to obtain a minimum factor of safety equal to one for both the drained and undrained analyses.

Therefore, in order to obtain a factor of safety equal to one for the drained analysis, two factors were considered. Firstly, the drained friction angle (phi) was changed while the other soil parameters were kept constant. Secondly, the drained friction angle and the other parameters were kept constant while the slope angle of the problem was changed. Meanwhile, for the undrained analysis the same exercise was carried out, but in this case only the undrained shear strength was changed to obtain a factor of safety equal to one.

The results gathered in Table …. show that when φ was changed and the other parameters kept the same, the two programs required different values to reach a factor of safety close to one: LimitState:GEO 2.0 required φ = 29.5°, whereas GeoStudio 2007 produced a factor of safety close to one at φ = 30°.

For the case where the slope angle was changed and the remaining material strength properties kept the same, the results are tabulated in Table ….. In LimitState:GEO 2.0 the slope angle giving a factor of safety close to one was 29.5°, whereas in GeoStudio 2007 the required slope angle was one degree smaller.

4.2 Case 2

For Case 2, the minimum factor of safety produced by LimitState:GEO 2.0 was 75% higher than that from GeoStudio 2007, owing to the different maximum interslice forces obtained during the analysis. In GeoStudio 2007 the highest force was ….., whereas the largest force value in LimitState:GEO 2.0 was ……

Besides that,

4.3 General discussion: different factors of safety obtained from the two programs

As mentioned earlier, the two programs approach the factor of safety calculation differently, so the solutions they produce for any given analysis also differ. The difference between the outcomes may therefore be small or large, depending on the case.

In GeoStudio 2007, the Morgenstern-Price (M-P) method was used to perform the analysis. This method considers both shear and normal interslice forces, satisfies both moment and force equilibrium, and allows a variety of user-selected interslice force functions. Simpler methods, which do not include all interslice forces and do not satisfy all equations of equilibrium, can sometimes be on the unsafe side (GeoStudio, 2007).

In LimitState:GEO 2.0, the accuracy of the solution is controlled by the specified nodal density. Within the set of all possible discontinuities linking pairs of nodes, every potential translational failure mechanism is considered, and failure mechanisms involving rotations along the edges of solid bodies can also be identified. The critical mechanism and the collapse factor of safety are determined on the basis of the upper bound theorem of plasticity.

CHAPTER 5: CONCLUSION

The results fulfilled the aims of the study in determining parameters giving a minimum factor of safety of one for the drained and undrained analyses of Case 1. Based on the simulated factors of safety, the two programs give different results for the same case because of the different methods they use. Even in Case 1, where the factors of safety produced were almost identical, the interslice forces produced by the two programs differ, because the interslice force distributions assumed by each method are not the same.

Safe fill slope construction requires geotechnical input from engineers with relevant geotechnical experience during planning, design, construction and maintenance. A higher factor of safety increases the cost, but delivers better quality over the life of the slope.

CHAPTER 6: COMPARISON BETWEEN LIMITSTATE: GEO 2.0 AND GEOSTUDIO 2007

From the desk study carried out using both LimitState:GEO 2.0 and GeoStudio 2007, each program has strengths and limitations for running slope stability analyses, which are summarised in the table below:

CHAPTER 7: REFERENCES

Limitstate Ltd. (2010). LimitState:GEO Manual, Version 2.0. http://www.limitstate.com/files/pdf/geo/GEO_Manual.pdf. Last accessed 14/3/2011.

Fredlund, D.G. (2001). The Relationship between Limit Equilibrium Slope Stability Methods. Department of Civil Engineering, University of Saskatchewan, Saskatoon, Saskatchewan, Canada.

GeoStudio (2010). Stability Modeling with SLOPE/W 2007 Version. Geo-Slope International Ltd., Calgary, Alberta, Canada.

Geotechnics 2 Notes: Slope Stability. Belfast: Queen's University of Belfast.


Macro-Scale Modeling

To analyse the response of large structures with complex architecture, macro-scale modelling approaches are preferable because they save computational time during simulation. Relying in general on classical laminate plate theory, macro-scale modelling approaches are available in most commercial finite element codes, such as Abaqus and LS-DYNA.

To model a structure with anisotropic properties using a macro-scale approach, several material parameters in different directions are needed, such as stiffnesses, Poisson's ratios, strengths and damage parameters. All of these parameters are determined either from a number of experimental tests or from the results of a meso-scale homogenization. Analytical methods for predicting the effective properties of the material have been reviewed by Younes.
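As a standalone illustration of how such ply-level parameters feed classical laminate plate theory, the sketch below assembles the extensional stiffness matrix of a simple cross-ply laminate. The ply properties, thickness and layup are hypothetical values, not data from the studies cited here, and the sketch is independent of any particular FE code.

```python
import numpy as np

def q_reduced(E1, E2, G12, nu12):
    """In-plane reduced stiffness matrix Q of a transversely isotropic ply (CLT)."""
    nu21 = nu12 * E2 / E1
    d = 1.0 - nu12 * nu21
    return np.array([[E1 / d,        nu12 * E2 / d, 0.0],
                     [nu12 * E2 / d, E2 / d,        0.0],
                     [0.0,           0.0,           G12]])

def q_bar(Q, theta_deg):
    """Rotate the ply stiffness into the laminate axes: Qbar = T^-1 Q R T R^-1."""
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    T = np.array([[ c*c, s*s,  2*c*s],
                  [ s*s, c*c, -2*c*s],
                  [-c*s, c*s, c*c - s*s]])
    R = np.diag([1.0, 1.0, 2.0])          # engineering-strain correction
    return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

# Hypothetical CFRP ply properties (GPa) and a [0/90/90/0] layup of 0.125 mm plies
Q = q_reduced(E1=135.0, E2=10.0, G12=5.0, nu12=0.30)
angles, t_ply = [0, 90, 90, 0], 0.125e-3
A = sum(q_bar(Q, a) * t_ply for a in angles)   # extensional stiffness matrix A = sum(Qbar_k * t_k)
print(A)
```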

Several disadvantages of the macro-scale modelling approach limit its application. Fibre architecture, such as the undulation and intercrossing of fibre tows, is neglected, so detailed stress and strain solutions among the constituents and localized damage or failure cannot be provided. Despite these drawbacks, the macro-scale approach can efficiently capture the global response when simulating composites with different fibre architectures.

This is particularly evident when the analysed structure is relatively large. Macro-scale material parameters, such as elastic and failure properties, are very important for an accurate solution, but obtaining them is tedious and costly because good experimental support is needed. Consequently, composite systems with fewer material parameters, such as a transversely isotropic unidirectional laminate, are favourable for macro-scale modelling.

Y. Shi et al. employed the macro-scale approach to predict the impact damage of composite laminates, in the form of intra- and inter-laminar cracking, under low-velocity impact. Essential features of the model include stress-based criteria for damage initiation, fracture mechanics techniques for damage evolution, and the Soutis shear stress-strain semi-empirical formula to capture the nonlinear shear behaviour of the composite.

Good agreement was achieved between the numerical results and the experimentally obtained curves of impact force and absorbed energy versus time. In addition, the proposed damage evolution model was able to capture the various damage mechanisms that occur after impact.

A macro-scale FE analysis of a carbon fibre reinforced composite plate was conducted by A. Riccio et al. to predict damage onset and evolution under low-velocity impact. Inter-laminar (delamination) and intra-laminar failure mechanisms were predicted using cohesive elements and Hashin's failure criteria, respectively. To improve accuracy, a global-local technique was applied to refine the mesh in the impact zone. Comparisons between numerical and experimental results under different impact energies, in terms of global impact parameters such as maximum impact force and maximum impact displacement, were found to be in fair agreement.
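Since Hashin's criteria are named above, a minimal plane-stress version is sketched below as an illustration of the idea, not of the cited authors' implementation; the stress state and strength values are hypothetical, and commercial damage models built on these criteria differ in detail.

```python
def hashin_2d(s11, s22, t12, XT, XC, YT, YC, SL, ST):
    """Plane-stress Hashin initiation indices (a value >= 1.0 means onset).
    Strengths: XT/XC fibre tension/compression, YT/YC matrix tension/
    compression, SL/ST longitudinal/transverse shear. All values hypothetical.
    """
    fibre = ((s11 / XT) ** 2 + (t12 / SL) ** 2 if s11 >= 0.0
             else (s11 / XC) ** 2)
    matrix = ((s22 / YT) ** 2 + (t12 / SL) ** 2 if s22 >= 0.0
              else (s22 / (2 * ST)) ** 2
                   + ((YC / (2 * ST)) ** 2 - 1.0) * (s22 / YC)
                   + (t12 / SL) ** 2)
    return {"fibre": fibre, "matrix": matrix}

# Example ply stresses (MPa) checked against hypothetical CFRP strengths (MPa)
print(hashin_2d(s11=1200, s22=40, t12=60,
                XT=2000, XC=1200, YT=50, YC=200, SL=90, ST=60))
```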

D. Feng et al. examined the structural response and failure mechanisms of composite laminates subjected to low-velocity impact using the macro-scale technique in combination with constitutive models for intralaminar and interlaminar damage modes. The numerical simulations were compared with experimental data obtained by drop-weight impact testing and stereoscopic X-radiography. Both the structural impact response and the major damage mechanisms predicted by the FE model over the range of impact energies showed reasonably good agreement with the drop-weight test data.

H. Ullah et al. conducted experimental characterisation and numerical simulation of the deformation behaviour and damage of woven CFRP composite laminates under quasi-static bending. Two-dimensional macro-scale models were developed, and the numerical results showed that the damage initiation and evolution processes in the woven laminates agree with the experimental data.

Yumin Wan et al. studied the mechanical properties and failure mechanisms of three-dimensional (3D) braided composites subjected to quasi-static and high strain rate compressive loading. Both meso- and macro-scale models were developed, integrating a strain-rate-sensitive elasto-plastic constitutive relationship with ductile and shear failure criteria.

Experimental data were used to verify the results obtained from both models, and the results are promising. A macro-scale model of a woven composite has also been developed by Xiao et al. using LS-DYNA to simulate the onset and evolution of damage. Remarkably, failure mechanisms under different types of loading, including tension, compression and shear, can be predicted with this model.

In summary, even though the macro-scale modelling approach cannot predict the behaviour of the reinforcement, the matrix or the fibre-matrix interface, its homogenization feature makes it an effective first-level solution in the modelling framework, especially for impact simulation of large-scale structures.

Prediction of detailed local failure is only possible with micro- and meso-scale approaches, while higher-scale models can exploit the effective material properties obtained from them. In brief, a comprehensive evaluation of material failure responses can be achieved by combining micro-, meso- and macro-scale approaches.


Critical Analysis of Mass Spectrometry

Introduction

1. Background:

The analytical technique that I have chosen to analyse in depth is mass spectrometry (MS).

This analytical technique is essentially the study of ionised molecules in the gas phase; its main use is in determining the molecular weight of the molecule in the sample under investigation by accelerating ions in a vacuum. While the technique has been around for over one hundred years, significant advances are still being made to cater for more demanding samples, as discussed in more detail later on. The main difference between mass spectrometry and spectroscopic methods such as NMR is that it does not depend on transitions between energy states, which may account for its popularity. The diagram below (Figure 1.1) [1] shows a simple schematic of a common mass spectrometer using electron ionization:

Figure 1.1 represents a schematic diagram of an electron ionization-mass spectrometer showing the various processes involved. Courtesy of www.molecularstation.com.

In its simplest form, the process of determining the molecular weight of the sample typically occurs over four main stages: sample volatilisation, ionisation, separation and detection.

Sample volatilisation: A gaseous or volatile sample can be inserted directly into the mass spectrometer, while more solid samples require heating before insertion in order to produce a more volatile or gaseous sample. As can be seen from the figure above, the sample then moves further down the spectrometer towards the region where ionization of the molecules occurs.

Ionization: The sample is then bombarded with high-energy electrons from an electron gun, typically at around 70 electron volts (eV). When the molecules collide with the high-energy electron beam, energy is transferred from the beam to the molecules. A molecule may then lose an electron, forming a cation known as the molecular ion (M+•) [2]. This process is represented in the equation below (Figure 1.2):

M + e– → M+• + 2e–   (M+• = molecular ion)

This electron bombardment usually causes most of the molecular ions to fragment; some of the fragments gain no charge, remain neutral and play no further part in the analysis. The main purpose of ionisation is to impart a charge to the sample so that the molecules break up and become charged. The ionization method discussed here is electron ionisation, but there are many other methods of ionization, which will be discussed in detail later in this analysis.

Separation: The beam of newly charged molecular ions then passes through a mass analyzer, in this case a strong, controllable magnetic field, which separates the charged molecules according to their mass-to-charge ratio (m/z); molecules that are "too heavy" or "too light" are deflected towards the top or bottom of the spectrometer and so avoid detection. By varying the magnetic field, ions with different m/z values can be detected. Just as there are many ionization methods for different applications, there are also several types of mass analyzer, which are discussed later.
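As a rough sketch of how scanning the field selects m/z, the single-sector relation m/z = eB²r²/(2V) can be evaluated directly; the sector radius and accelerating voltage below are assumed, illustrative values rather than parameters of any particular instrument.

```python
import math

E_CHARGE = 1.602176634e-19    # elementary charge, C
AMU      = 1.66053906660e-27  # atomic mass unit, kg

def transmitted_mz(B, r=0.30, V=3000.0):
    """m/z (Da, singly charged) passed by a magnetic sector of radius r (m)
    at field B (T) and accelerating voltage V (V), assuming illustrative
    r and V. From (1/2)mv^2 = eV and mv^2/r = evB:  m/z = e B^2 r^2 / (2V)."""
    return (E_CHARGE * B**2 * r**2 / (2.0 * V)) / AMU

# Scanning the field brings different m/z values onto the detector slit.
for B in (0.2, 0.4, 0.8):
    print(f"B = {B:.1f} T  ->  m/z ~ {transmitted_mz(B):.0f}")
```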

A fundamental consideration in mass spectrometry at this point is mass resolution, defined as R = M/ΔM, where R is the resolution, M is the mass of the particle and ΔM is the mass difference from the adjacent peak with overlap at 10% of peak height. Nowadays a magnetic sector analyzer can have R values of 2000-7000, depending on the instrument [3].

Detection: The final stage comprises a detector, which amplifies and records the ions according to their m/z values. The detector may be set up to detect molecular ions with different mass-to-charge ratios. Each molecular ion has a mass almost identical to the mass of the molecule (M), and because the charge on most of the ions is 1, the value of m/z obtained for each ion is simply its mass. The data collected by the detector are fed to a recorder and presented as a plot of the number of ions versus their m/z values [3]. An example of this type of plot is shown below in Figure 1.3 [4]:

Figure 1.3: A typical graph produced for a sample using mass spectrometry. Picture courtesy of www.research.uky.edu.

2. Methods of Ionization:

Electron Ionization (EI): as described above, this is the simplest method for converting the sample to ions and is found on the most common mass spectrometers. Many other simple and complex ionization methods exist for analyzing various samples. Some of these include:

Chemical Ionization (CI): This is a softer ionization method than EI, causing less fragmentation of the sample under investigation; it is therefore mainly used for more sensitive compounds, such as 2,2-dimethylpropane, which fragments readily under little stress. The reduced fragmentation occurs because the ions arise from a chemical reaction rather than from electron bombardment and therefore possess less energy than those produced by EI. In chemical ionization, the molecules to be studied are mixed with an ionized reagent gas which is present in excess. Common reagent gases for CI include ammonia, methane, isobutane and methanol; the choice depends on the degree of ion fragmentation required, and different reagent gases produce different mass spectra. The main advantage of CI is its softer approach, which gives clearer results than EI for some samples; other advantages include relatively cheap and robust hardware, as with EI. The main drawback of chemical ionization is that, as with electron ionization, the sample must be readily vaporised for the molecules to gain that vital charge, which immediately rules out high molecular weight compounds and biomolecules [3]. CI and EI are therefore very similar methods of ionization, and many modern mass spectrometers can switch between the two effortlessly.

Electrospray Ionization (ESI): a type of atmospheric pressure ionization. This technique is very useful for studying high molecular weight biomolecules and other samples that are not very volatile, as discussed above. The sample is sprayed through a fine capillary with a charged surface; on entering the ionization chamber it produces multiply charged ions along with singly charged ions. This formation of multiply charged ions is very useful in the mass spectrometric analysis of proteins [3]. Negative ions may also be formed in ESI, in which case the polarity of operation must be reversed. ESI has become much more common in recent years because it works with samples in solution, which permits its use in LC-MS [5]. Thermospray Ionization (TSI) is closely related to ESI, differing only in that it relies on a heated capillary rather than a charged one; ESI remains the more popular of the two methods.

Atmospheric-Pressure Chemical Ionization (APCI): As its name suggests, APCI is also a form of atmospheric pressure ionization, so a similar interface is used for both methods. The method originated in the 1970s, when it was first combined with liquid chromatography (LC) by Horning et al. [6], who described a new atmospheric-pressure ion source that used 63Ni beta emission to produce the required ions. Although APCI and ESI are complementary methods, the main advantage of APCI over ESI is that it is more effective at determining mass spectra for less polar compounds, because gas-phase ionization is more efficient in APCI. Many such MS instruments are now available with high mass resolution and accurate mass measurement, properties that are not as readily available with GC-MS instruments.

Fast Atom Bombardment (FAB): this ionization method is primarily used for large polar molecules. The sample is usually dissolved in a non-volatile, polar liquid matrix such as glycerol and then bombarded with a beam of fast atoms, such as Xe atoms; ionization results from this interaction. It is a simple and fast method and is very good for high-resolution measurements. On the downside, it can be difficult to distinguish low molecular weight compounds from the chemical background, which is always high [5].

Desorption Chemical Ionization (DCI), Negative-ion chemical ionization (NCI), Field Ionization (FI) and Ion Evaporation are other less common ionization methods used in mass spectrometry.

3. Mass Analysers:

As described earlier, mass analyzers are used to separate the various ions according to their mass-to-charge ratio (m/z) and to focus the ions with the desired m/z value onto the detector. Available mass analyzers include double-focusing, quadrupole, time-of-flight and ion trap mass analyzers.

Double-Focusing Mass Analyzers are used when high resolution is of paramount importance. The high resolution is achieved by modifying the basic magnetic design: the ion beam passes through an electrostatic analyser before or after the magnetic field, so that the particles travel at the same velocity, which increases the resolution of the mass analyzer dramatically. Resolution may be varied by using narrower slits before the detector. This type of analyzer reduces sensitivity while increasing accuracy, leaving a fine line between success and failure in detection, so it is used only for very selective purposes.

Quadrupole Mass Analyzers do not use magnetic forces for mass separation; instead they consist of four parallel rods arranged along the direction of the ion beam. Using a combination of direct-current and radio-frequency voltages, the quadrupole separates the ions according to their mass extremely quickly. Quadrupole mass analyzers are found on most GC-MS instruments.

Time-of-Flight (TOF) Mass Analyzers operate by measuring the time taken for an ion to travel from the ion source to the detector [7]. This is based on the simple fact that lighter ions have a greater velocity and therefore strike the detector first. This type of analyzer has become increasingly common in recent years because the electronics it requires have become much more affordable since it was first introduced in the 1940s. In recent years the resolution and sensitivity of TOF analyzers have been increased by the insertion of a reflecting plate within the flight tube itself [8]. The main application of this type of analyzer is in Matrix-Assisted Laser Desorption Ionization Mass Spectrometry (MALDI-MS), discussed later.
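As a rough illustration of the "lighter ions arrive first" principle, the sketch below evaluates the ideal linear-TOF relation t = L·sqrt(m/(2qV)); the drift length and accelerating voltage are assumed, illustrative values, and real instruments include focusing corrections not modelled here.

```python
import math

E_CHARGE = 1.602176634e-19    # elementary charge, C
AMU      = 1.66053906660e-27  # atomic mass unit, kg

def tof_flight_time(mz, V_accel=20_000.0, L=1.0):
    """Ideal linear TOF flight time (s) for a singly charged ion of mass
    m/z Da, assuming accelerating potential V_accel (V) and field-free
    drift length L (m), both illustrative. t = L * sqrt(m / (2 q V))."""
    m = mz * AMU
    return L * math.sqrt(m / (2.0 * E_CHARGE * V_accel))

for mz in (100.0, 1000.0, 10000.0):
    print(f"m/z {mz:7.0f}: {tof_flight_time(mz) * 1e6:6.2f} microseconds")
```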

The Ion Trap Mass Analyzer consists of two hyperbolic end-cap electrodes and a doughnut-shaped ring electrode [7]. It is very similar to the quadrupole analyzer in terms of resolution and operating principle, but the ion trap is more sensitive.

4. The Mass Spectra:

The main piece of information sought from a mass spectrum is the molecular weight of the sample that was analysed. The value of m/z at which the molecular ion (M+•) appears on the spectrum tells us the molecular weight of the original molecule. The most abundant ion formed during ionization gives the tallest peak in the spectrum, known as the base peak (Figure 1.3). From this information very exact molecular weights of substances may be deduced, which is probably the most important application of mass spectrometry. It also allows us to distinguish between substances with very similar molecular masses, which we cannot do by other simple means. For example, C14H14 has a molecular mass of 182.1096 and C12H10N2 has a molecular mass of 182.0844; although these are two completely different molecules, they differ by only 0.0252 mass units and can only be differentiated by MS. The instrument required in this case is a double-focusing mass spectrometer, as discussed briefly above, which is capable of measurements accurate to 0.0001 atomic mass units. The chance of two compounds having exactly the same mass spectrum is very small, so an unknown compound can be identified by comparing its mass spectrum with a library of mass spectra of known compounds.
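Applying the R = M/ΔM definition from Section 3 to the two exact masses quoted here shows why a double-focusing instrument is needed; the short sketch below simply reproduces that arithmetic.

```python
# Resolving power needed to separate two near-isobaric ions, R = M / dM,
# using the exact masses quoted in the text.
m_c14h14   = 182.1096   # C14H14
m_c12h10n2 = 182.0844   # C12H10N2

dM = abs(m_c14h14 - m_c12h10n2)   # 0.0252 mass units
R  = m_c14h14 / dM                # roughly 7200
print(f"dM = {dM:.4f}, required resolution R = {R:.0f}")
```

The required resolution of roughly 7200 sits at or beyond the upper end of the 2000-7000 range quoted for a single magnetic sector, which is why the higher resolving power of a double-focusing instrument is needed for this pair.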

5. Mass Spectrometry in Synergy with other Techniques:

Over the years, mass spectrometers have evolved to be used not only on their own but also in tandem with a range of other analytical techniques, such as liquid chromatography-mass spectrometry (LC-MS) for purity assessment and the investigation of rat urine, and gas chromatography-mass spectrometry (GC-MS) for the detection and measurement of illicit drugs in biological fluids. LC-MS has become the gold standard for the detection and analysis of substances. Gas chromatography also works particularly well with mass spectrometry, because the sample is already in its gaseous form at the interface. This system was used by De Martinis and Barnes [9] for the detection of drugs in sweat using a quadrupole mass spectrometer, as discussed earlier. Identifying metabolites in the biological fluids mentioned above can be very difficult because these metabolites are present in extremely low concentrations, such as parts per million (ppm) or even less. For many years nuclear magnetic resonance (NMR) was used to identify these metabolites, but in recent times mass spectrometry has become the more popular method, probably because MS is more sensitive than NMR and therefore requires less sample.

6. Advances in Mass Spectrometry Instruments and their Limitations:

As mentioned briefly above, it is very difficult to study large biomolecules such as proteins because they are large polar molecules that are not volatile and are therefore difficult to convert to the gaseous state for ionization. In recent years a solution to this problem has been found with the introduction of Matrix-Assisted Laser Desorption Ionization (MALDI).

MALDI is a laser-based soft ionization method in which the sample is dissolved in a solution containing an excess of a matrix, such as sinapinic acid, that has a chromophore absorbing at the laser wavelength. The sample is placed in the path of high-intensity laser photons, and the resulting interaction ionizes the sample molecules and ejects them from the matrix. One of the main advantages of MALDI-MS is that only a very small amount of sample is required (1 × 10-5 moles) [3]. Thanks to its soft ionization, this technique has proved to be one of the most successful ionization methods for the mass spectrometric analysis of large molecules. It has been used on drug-biomolecule complexes to investigate the interaction properties and binding sites of biomolecules with various drugs on the market today [10]. The method was also used by Zschornig et al. to investigate extracts of human lipoproteins after treatment with cholesterol esterases [11].

Although very popular, this method suffers from some drawbacks. It depends strongly on the sample preparation method: any mistake made during sample preparation, or any contamination introduced into the matrix, renders the rest of the investigation pointless. Another drawback is the short sample life, although some research has been undertaken [12] into the use of liquid matrices in the belief that these may increase sample life by exploiting the self-healing properties of the sample through molecular diffusion. A further obvious drawback is that the sample may not be soluble and hence may not dissolve in the matrix; this problem may be overcome by compressing a finely ground mixture of sample and matrix [13]. Another disadvantage, which may become detrimental in the future, is that MALDI is not easily compatible with LC-MS; this problem may have to be rectified if the popularity of MALDI is to continue.

Electrospray Ionization (ESI) was described in detail in the section on ionization methods above; this relatively young technique is proving very useful in LC-MS for investigating a variety of molecules, including proteins, DNA and synthetic polymers.

The main problem with ESI-MS is that the spectra produced may contain many peaks from multiply charged ions, which can complicate the interpretation of the spectra of some samples. The ESI instrument can also suffer decreased sensitivity in the presence of impurities such as salts and buffers; this is not the case with MALDI.

Although MALDI and ESI are both very effective methods of producing mass spectra for large molecules such as proteins, MALDI remains the method of choice for most analyses. However, as discussed above, the fact that MALDI is not very compatible with LC-MS may pave the way for a surge in popularity of the LC-MS-friendly ESI.

7. The Future of Mass Spectrometry:

Mass spectrometry has come a long way since 1897, when Joseph J. Thomson used an early mass spectrometer to discover the electron, and there is no reason why the mass spectrometer should not continue to advance and evolve for the foreseeable future. It is an extremely versatile analytical tool which can work seamlessly in tandem with other analytical methods such as chromatography.

The main areas in which mass spectrometers have been used for the quantification of compounds are LC-MS and GC-MS, using the various ionization methods described above. LC-MS is the gold standard in quantitative bioanalysis and is used by the majority of pharmaceutical companies; the minority tend to use other techniques, such as high-pressure liquid chromatography (HPLC) with UV detection, because they consider LC-MS too expensive.

An area of mass spectrometry to watch in the future is the use of ion-trap technology to perform LC-MS-MS as well as LC-MS [7]. This method already exists, but reliable routine bioanalytical assays have not yet been produced.

References:

[1] http://www.molecularstation.com/molecular-biology-images/506-molecular-biology-pictures/21-mass-spectrometer.html

[2] Harris, D.C. (2003). Quantitative Chemical Analysis, 6th edition. W. H. Freeman and Company, New York.

[3] Pavia, D.L., Lampman, G.M., Kriz, G.S. and Vyvyan, J.R. Introduction to Spectroscopy, 4th edition. Brooks/Cole, Cengage Learning.

[4] www.research.uky.edu/ukmsf/whatis.html

[5] Ionization Methods in Organic Mass Spectrometry.

[6] Horning, E.C., Caroll, D.J., Dzidic, I., Haegele, K.D., Horning, M.G. and Stillwell, R.N. (1974). Atmospheric pressure ionization (API) mass spectrometry. Solvent-mediated ionization of samples introduced in solution and in a liquid chromatograph effluent stream. J. Chromatogr. Sci., 12(11), 725-729.

[7] Venn, R.F. (Ed.) (2000). Principles and Practice of Bioanalysis. Taylor and Francis.

[8] Ashcroft, A.E. (1997). Ionisation Methods in Organic Mass Spectrometry. Cambridge, UK: The Royal Society of Chemistry.

[9] http://www.asms.org/whatisms/p1.html: The American Society for Mass Spectrometry.

[10] Skelton, R., Dubois, F. and Zenobi, R. (2000). Analytical Chemistry, 72, 1707-1710.

[11] Zschornig, Markus Pietsch, Rosmarie Süß, Jurgen Schiller and Michael Gutschow. Cholesterol esterase action on human high density lipoproteins and inhibition studies: detection by MALDI-TOF MS.

[12] Zenobi, R. and Knochenmuss, R. (1999). Mass Spectrom. Rev., 17, 337-366.
