Industrial Instrumentation on Load Cells

Load cells are sensors used to measure level or pressure by converting a force (torque or mass) into an electrical signal; a display unit then shows this signal as a level or pressure reading.

Load cells are also known as load transducers.

In the dictionary, a load cell is defined as a “weight-measuring device necessary for electronic signals that display weight in the form of figures.”

Load cells can be classified according to how they operate:

Load cells that use liquid pressure or air pressure.

Load cells that use elasticity.

Load cells that use the magnetostrictive or piezoelectric effect.

The strain gauge load cell is the most widely used of all kinds of load cells.

Therefore, when we say “load cell,” we are usually referring to a strain gauge load cell.

There are, however, many other measuring devices, such as piezoelectric sensors, magnetostrictive sensors, capacitance sensors, and others.

1.2-Types of Load Cells

Strain Gauge Load Cells

Tension Load Cells

Pneumatic Load Cells

Hydraulic Load Cells

Shear Load Cells

Compression Load Cells

Bending Load Cells

Ring Torsion Load Cells

Pancake Load Cells

Single Point Load Cells

1.3-Strain Gauge Load Cells

This is a type of load cell used to measure the level of any storage vessel.

1.3.1-Working Principle

When pressure is applied to a conductor, its length changes; the change in length changes the conductor's resistance, and the display unit shows the change in level in proportion to the change in resistance.

1.3.2-Construction and working

A strain gauge consists of a long conductor arranged in a zig-zag pattern on a flexible membrane exposed to the area where pressure is applied. The conductor is connected into a Wheatstone bridge as one of its resistances. When pressure or weight is applied to the membrane, the conductor attached to it is stretched; stretching changes the conductor's length, and the change in length increases its resistance.

Usually four gauges, or a multiple of four, are connected in a Wheatstone bridge configuration in order to convert the very small change in resistance into a suitable electrical signal.
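To make the bridge arithmetic concrete, here is a minimal Python sketch of a quarter-bridge calculation with one active gauge. The gauge factor, nominal resistance, and excitation voltage are illustrative assumptions, not figures from this essay.

```python
# Hypothetical quarter-bridge strain gauge readout (illustrative values).
GAUGE_FACTOR = 2.0    # typical for metal-foil gauges (assumed)
R_NOMINAL = 350.0     # ohms, unstrained gauge resistance (assumed)
V_EXCITATION = 10.0   # volts across the bridge (assumed)

def bridge_output(strain: float) -> float:
    """Output voltage of a quarter Wheatstone bridge for a given strain.

    Exact ratiometric formula; for small strains it reduces to the
    familiar approximation V_out ~ V_ex * GF * strain / 4.
    """
    r_gauge = R_NOMINAL * (1 + GAUGE_FACTOR * strain)   # stretched gauge
    # One active gauge; the other three bridge arms are fixed resistors.
    return V_EXCITATION * (r_gauge / (r_gauge + R_NOMINAL) - 0.5)

# 1000 microstrain gives only about 5 mV, which is why an amplification
# circuit is needed to read such tiny changes.
print(f"{bridge_output(1000e-6) * 1000:.3f} mV")
```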

Because these gauges combine mechanical and electrical components, the mechanical parts are located at the site while the electrical parts are kept in the control room, owing to their sensitivity to environment and temperature. The wires used to transmit the signals also have their own resistance, which must be taken into account in the design.

The arrangement of the strain gauge load cell in the Wheatstone bridge and its working are shown in the following diagram.

Strain gauge load cells are mostly placed at the bottom of a vessel to measure the level of the column or vessel.

Figure: Strain gauge load cell in a Wheatstone bridge.

1.3.3-Advantages

Strain gauge load cells are used in the automotive industry to check the structural performance of the materials used in doors, hoods, trunks, etc.

Strain gauge load cells can be used for weighing purposes.

Strain gauge load cells can also be used for material testing in the process industry.

Strain gauge load cells are also used as a major component of tensile testing machines.

Strain gauge load cell accuracy is about 0.07% of rated capacity.

Strain gauge load cells can be used in both tension and compression.

Strain gauge load cells are less costly and are therefore the most widely used in industry.

1.3.4-Disadvantages

Strain gauge load cells require continuous electric power to produce and display their signals.

Strain gauge load cells also require an amplification circuit to generate the output display, because the signals produced by the gauge itself are at very low voltage, on the order of millivolts.

Strain gauge load cells cannot be used for pressure measurement of highly reactive or corrosive materials, which can damage the gauge.

Strain gauge load cells cannot be used to measure very high pressures if the diaphragm used is plastic.

1.4-Tension Load Cells

This is another type of load cell, also used to measure level.

1.4.1-Working Principle

It consists of a vibrating wire transducer fixed in a thick-walled metallic cylinder, designed to provide a highly stable and sensitive means of monitoring tensile loads in weighing systems, such as process weighing systems and batch systems.

As the applied load on the cell increases, the force on the internal vibrating wire also increases, changing its tension and hence the resonant frequency of the wire. The frequency is measured and is related to the applied weight.
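Since the readout is frequency-based, a small sketch may help. For a taut wire f = (1/2L)·sqrt(T/μ), so the wire tension, and hence the load, scales with the square of the resonant frequency; readouts typically apply a calibration factor to (f² − f0²). All numbers below are hypothetical.

```python
# Hypothetical vibrating-wire load cell conversion. For a taut wire
# f = (1/2L) * sqrt(T/mu), so tension (and load) scales with f squared.
F_ZERO = 800.0    # Hz, resonant frequency at zero load (assumed)
K_CAL = 2.5e-4    # kg per Hz^2, calibration factor (assumed)

def load_from_frequency(f_measured: float) -> float:
    """Applied load (kg) inferred from the measured resonant frequency."""
    return K_CAL * (f_measured**2 - F_ZERO**2)

print(f"{load_from_frequency(1000.0):.1f} kg")   # -> 90.0 kg
```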

1.4.2-Construction and working

The main part of this load cell is a strain gauged diaphragm permanently secured in the transducer shell. The transducer is fitted with a metallic eye allowing in-line connection to the weighing system, and a metallic hook, attached to the sensitive diaphragm, provides the means by which the weight is applied in suspension.

The load cell is vented to the atmosphere to eliminate barometric effects and achieve maximum accuracy. The signal cable attached to the load cell runs to the control room, where the signals can be monitored.

There are also thermistors placed inside the shell, used to measure the temperature of the working fluid or vessel.

1.4.3-Advantages

The main advantage of the tension load cell is that it is highly sensitive and stable.

Because it is a vibrating-wire sensor whose output is a frequency, it is not affected by changes in cable resistance, so long signal cables are not a problem.

The frequency of the vibrating wire is measured by either a portable readout box or a data logger, so it can give very accurate readings.

The output signal ranges from 0 to 5 volts, so no amplification unit is required and the cost decreases.

The rated capacity of the tension load cell is from 10 kg to 15 kg.

The accuracy of tension load cells is ±0.1%.

The operating temperature range of tension load cells is −20°C to +80°C.

Such transducers have almost zero drift, and temperature has very little effect on their accuracy.

1.4.4-Disadvantages

This type of load cell cannot be used with high-temperature fluids to find the level or weight of the fluid in a column.

This type of load cell also cannot be used for high capacities, that is, for level or weight measurements on large tanks.

This type of load cell is strongly affected by high temperatures because of the sensitive nature of its sensing wire.

1.5-Pneumatic Load Cells

This is another type of load cell used to measure weight in industry; pneumatic load cells are used for low capacities.

1.5.1-Working Principle

This type of load cell works on “the force-balance principle.”

1.5.2-The Force-Balance Principle

The inertial force produced by a seismic ground motion shifts the mass from its original equilibrium position, and the change in position or velocity of the mass is then converted into an electrical signal. This principle applies to low-range load cells.

For long-range load cells, the inertial force is balanced by an electrically generated force so that the mass moves as little as possible.

1.5.3-Working of the pneumatic load cell

This kind of load cell consists of a sensing element exposed to the site, or to the vessel whose pressure or contained-fluid weight is to be measured. In this kind of load cell, the force-transferring medium is air, as compared with the liquid used in a hydraulic load cell. When the contained fluid applies force to the sensing part of the load cell, the cell transfers this force to the air inside, and the air in turn applies it to a potentiometer placed in a Wheatstone bridge. As force is applied to the sensing part, the resistance of the variable potentiometer changes; the balance between the bridge resistances is disturbed, and the magnitude of the force applied to the sensing element is shown on the display unit.

Figure: Pneumatic load cell.

Another technique widely used in this kind of load cell is the piezoelectric crystal. Here the sensing element transfers the applied force to the internal fluid (air), which imparts it to the crystal. The force applied to the crystal through the air disturbs its structure, and this disturbance changes the potential across the crystal; the change in potential is detected by a voltmeter, converted into weight or force units, and shown on the display unit. Most of the time a Wheatstone bridge is used in this kind of load cell, usually with only one variable resistance while the other resistances in the bridge are fixed.
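The force-balance idea reduces to one line of arithmetic: at balance, the air pressure under the diaphragm supports the load, so F = P × A over the effective diaphragm area. The sketch below uses an assumed area and is illustrative, not a description of any particular cell.

```python
# Pneumatic load cell at balance: the regulated air pressure under the
# diaphragm supports the load, so F = P * A_effective. Values assumed.
DIAPHRAGM_AREA_M2 = 0.002   # effective diaphragm area, 20 cm^2 (assumed)
G = 9.81                    # m/s^2, gravitational acceleration

def mass_from_pressure(gauge_pressure_pa: float) -> float:
    """Mass (kg) supported for a given balancing air pressure."""
    force_n = gauge_pressure_pa * DIAPHRAGM_AREA_M2
    return force_n / G

# 50 kPa of balancing pressure corresponds to roughly 10.2 kg.
print(f"{mass_from_pressure(50_000):.1f} kg")
```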

1.5.4-Advantages

They are mostly used for smaller loads where safety and protection are of prime concern.

They are more accurate than hydraulic load cells because there is no change in the density or condition of the fluid used to transfer the applied force.

They are also preferred over hydraulic load cells because no liquid is used in these kinds of load cells.

These load cells are inherently explosion proof and insensitive to temperature variations.

Because they contain no fluids, there is no risk of contaminating the process if the diaphragm ruptures.

Pneumatic load cells are also used to measure small weights in applications where cleanliness and safety are essential.

1.5.5-Disadvantages

The main disadvantage of these load cells is that they cannot be used for high-capacity measurement.

Although they are very resistant to temperature effects, their accuracy is still disturbed at very high temperatures.

1.6-Hydraulic Load Cells

This is another type of load cell used to measure the magnitude of an applied force, convert it into an electrical signal, and display it digitally.

1.6.1-Working Principle

This type of load cell also works on “the force-balance principle.”

The only difference between the pneumatic and the hydraulic load cell is the transfer medium: in a pneumatic load cell the force-transferring medium is air, while in a hydraulic load cell it is usually a liquid, an incompressible oil also known as brake oil.

1.6.2-Construction and working

A hydraulic load cell consists of a fluid that acts as the force-transferring medium, a piezoelectric crystal used to convert the applied force into a potential difference, and an arrangement for converting this potential difference into a weight or pressure reading. There is a diaphragm that senses the force exerted from outside, and a shell in which the complete cell is enclosed.

When the pressure or weight of the vessel or column is applied to the diaphragm of the load cell, the diaphragm senses that force and transfers it to the fluid filling the shell. The fluid or oil then transfers the force to the piezoelectric crystal, in accordance with Pascal's law. The transferred force disturbs the internal structure of the piezoelectric crystal, and this change in structure generates a potential difference across the crystal. The potential difference is picked up by an electric detector, and the electrical signal is sent to the display unit to show the magnitude of the applied force, weight, or pressure.

Figure: Hydraulic load cell.

1.6.3-Advantages

These are mostly used to find the weight of material in storage tanks, bins, or hoppers.

The output of these load cells is linear and largely unaffected by the amount of the filling fluid (oil) or by its temperature.

If hydraulic load cells have been properly installed and calibrated, accuracy is typically within 0.25% of full scale or better, which is acceptable for most process weighing applications.

Because these load cells have no electrical components, they are ideal for use in hazardous or corrosive areas.

For more accurate measurement, the weight of the tank should be obtained by locating load cells at several points and summing their outputs, as in the sketch below.
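As a minimal sketch of the summing idea just mentioned, with hypothetical per-cell calibration factors and readings:

```python
# Multi-cell tank weighing: each cell carries part of the tank, so the
# total weight is the sum of corrected individual readings. All values
# here are hypothetical.
CELL_CAL = [1.002, 0.998, 1.001, 0.999]   # per-cell correction factors

def tank_weight(readings_kg: list[float]) -> float:
    """Total tank weight from individually corrected cell readings."""
    return sum(cal * r for cal, r in zip(CELL_CAL, readings_kg))

print(f"{tank_weight([251.0, 248.5, 250.2, 249.8]):.1f} kg")  # ~999.5 kg
```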

1.6.4-Disadvantages

One disadvantage of this type of load cell is that the elastomeric diaphragm limits the maximum force that can be applied to the piston or diaphragm to about 1,000 psig.

An electronic totalizer is required when a more accurate reading is to be obtained by summing the readings of the individual load cells.

Another disadvantage of hydraulic load cells is that they are expensive and complex to use.

1.7-Shear Load Cells

This is another type of load cell used to measure the weight, or the level, of a column containing a fluid or other material.

1.7.1-Working Principle

This type of load cell works on the shear of a web. A web of elastic material is inserted at some level in the vessel or storage tank, and the shear stress exerted by the fluid column stretches the web according to the load of the column. Thus, by measuring the shear stress, the level or weight of the fluid in the column can be measured.

1.7.2-Construction and working

This type of load cell consists of a web and a movable frame to which the web is fixed. Strain gauges are connected directly to the web and measure the weight or level of the column by measuring the shear stress exerted on the web by the liquid in the column. The web is inserted in the liquid column perpendicular to the column's axis; as the liquid exerts force on the stretchable, elastic web, the web stretches, and this stretched state is sensed by the strain gauges. The strain gauges then pass the measured shear stress, produced by the height of the liquid column, to electrical transducers, which convert it into electrical signals and transmit them to the display unit to show the measured value as a weight, level, or pressure, as required.
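The level computation implied here is hydrostatic: the force on a submerged element of known area gives the pressure, and P = ρgh gives the height. A sketch with assumed values follows; real cells are calibrated end to end.

```python
# Recovering liquid level from the force sensed by a submerged element
# of known area, via hydrostatic pressure P = rho * g * h. Illustrative
# values only.
RHO = 1000.0     # kg/m^3, liquid density (water assumed)
G = 9.81         # m/s^2
AREA_M2 = 0.01   # effective sensing area (assumed)

def level_from_force(force_n: float) -> float:
    """Liquid height (m) above the sensing element for a measured force."""
    pressure_pa = force_n / AREA_M2
    return pressure_pa / (RHO * G)

print(f"{level_from_force(196.2):.2f} m")   # -> 2.00 m
```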

1.7.3-Advantages

Shear load cells are now popular for all types of media and for high capacities.

The shear web is not limited to beam configurations only.

The shear-web sensing element is also used in more complex ways in high-capacity BSP and USP load cells.

Shear load cell technology is also used in reverse transducers.

Shear load cells can readily be sealed and protected from environmental effects.

1.7.4-Disadvantages

Shear load cells have relatively low sensitivity at the load point, so they cannot be used for small-scale measurements.

Shear load cells are expensive compared with plain strain gauges.

Shear load cells are fragile, because the web is very delicate and can easily be damaged by even a few moments of overload.

Decision-making Process in Tesco

  • A. A description of the broad organizational context, and the particular area being analyzed.

International expansion is one of the key features of many companies' strategies nowadays, especially for European-based corporations. Because the domestic market is usually already divided among the top companies in the industry, it is difficult for many companies to grow without expanding to areas where there is still plenty of room in the market. The annual growth rate of European-based or American-based supermarkets in overseas markets has been much higher than their growth rate at home, owing to the domestic market's limitations and severe competition. International expansion is usually connected with a large number of challenges which need to be addressed in managerial decision-making, starting with the general decision to expand to a particular country and finishing with decisions about pricing strategy in the new market. Because of poor decisions, many companies have suffered significant losses through international expansion and been forced to withdraw. At the same time, many companies have increased their profits significantly as a result of expansion, owing to well-balanced management decisions.

In this paper, the decision-making process of the UK's largest supermarket, Tesco, is analyzed. It is widely known that Tesco's growing power in the retailing market is for the most part determined by its aggressive international strategy. The particular focus of the analysis is the set of decisions made concerning Tesco's expansion to China, which some experts currently consider mistaken. The paper seeks to highlight the strengths and weaknesses of the management's decisions using relevant decision-making theories, and to provide recommendations on how the decision-making process in Tesco could be improved.

  • B. An analysis of the causes of the difficulties, making clear why they can be understood using relevant theories

The successful international strategy chosen by Tesco has ensured its fast growth in comparison with major competitors that currently occupy a larger share of the market. Tesco has expanded to twelve countries. Tesco's strategy in China can be described as follows: after making the decision to expand to China, the company conducted marketing research for about three years, and at the end of 2004 it entered the market by buying a stake in the existing Chinese hypermarket chain Hymall, which is considered small by Chinese standards and is not well recognized in the country. The decision-making process of the company's management is represented in Chart 1.

Chart 1. Decision-making process in Tesco.

Most of the decisions marked in the chart were correct, except for three decisions that could prove critical in the following years.

In our opinion, the decision to conduct marketing research for three years in China was incorrect; the company needed to enter the market much earlier. The reason the management came to an incorrect decision is its inability to understand the principles of the systems theory of decision-making. As systems theory states, organizations need to respond adaptively to changes in the environment and be prepared for constant change. Tesco's management failed to understand that the environment in China kept changing from the moment the research began. The low level of competition seen in the first year of the research had changed dramatically by the end of the third year. When Tesco finally entered the market, there were already many Chinese hypermarkets functioning efficiently, as well as Carrefour and Wal-Mart with well-established positions. Thus Tesco had to face severe competition, which could have been avoided if it had entered the market earlier.

However, it is necessary to analyze the thinking of the Tesco managers who came to this decision. In a 2002 interview, Tesco executive David Reid noted that China's market was still not ripe enough for Tesco to operate in: “Although you can open stores and generate sales in China, there aren’t great examples of people converting that into profit as yet.” (Child 2002: 137).

At the same time, Carrefour and Wal-Mart thought otherwise, and they entered the market much faster than Tesco. As a result, they are already expanding fast in China while Tesco is making only its first steps. Therefore, the decision to conduct research for three years cannot be considered favourable for the company.

Another decision which needs to be discussed is the choice of Hymall as a strategic partner. In this case, Tesco's management neglected the Anticipatory Management Decision Process Model, which is very important to the decision-making process. According to Ashley and Morrison (1997: 47), in order to make a correct decision, the company had to identify the emerging issues that could result from the decision before making it. Many experts do not consider Hymall the most suitable partner for Tesco, because it is not powerful enough in the Chinese retail market to enable Tesco to obtain a large share of the market. “The deal runs against the benchmark Tesco set itself of securing partnerships with the number one or two player in each national market. Hymall is small by Chinese standards.” (Britain’s biggest supermarket Tesco enters China’s mainland. Available at: english.people.com.cn/200407/14/eng20040714_149590.html). This step might prove fatal for Tesco in China. When choosing partners for joint ventures in China, it is important for foreign companies to decide very carefully, because the first step is the most important. Choosing a partner that is not influential enough in the country can be fatal, and the company will never be able to obtain a large market share. In Tesco's case, this is exactly what has happened. Instead of following its usual well-tested strategy of buying stakes in the top companies in the market, it seems satisfied with a partner that is not widely recognized there.

Tesco's management, though, considers the step it took very favourable for the company, because the acquisition enables it to establish firm positions: “We believe Ting Hsin is the right partner and Hymall is the right store chain for our strategic move into this exciting market.” (Britain’s biggest supermarket Tesco enters China’s mainland. Available at: english.people.com.cn/200407/14/eng20040714_149590.html).

Despite the optimism of Tesco's management, it is easy to see that Hymall is not the best partner for the company. Tradition in China leads companies to find partners only among influential players in the market. Hymall is small, and thus Tesco can never be considered a top player in the market. The decision to buy a stake in Hymall cannot be considered well grounded, because it runs against Chinese tradition. Tesco needed to buy a stake in a larger company.

The decision to open stores under Tesco's own name can be very damaging for the company, because here Tesco has again neglected the Anticipatory Management Decision Process Model. The decision could bring the company many problems in its first years. The market still does not recognize Tesco as a strong supermarket retailer, and it would be hard for Tesco to attract customers at the very beginning. It is even doubtful whether it would be able to attract customers after a few years, because Chinese consumers tend to favour domestic companies. Tesco's management considers this step favourable because it could increase the company's market share. However, it is very doubtful that the company could gain many new customers by opening stores under a name many people in China have never heard of.

Therefore, opening stores under Tesco's own name cannot be considered an efficient policy for the company. Instead, it needs to look for new partners and buy stakes in them in order to increase its market share.

It is also necessary to dwell briefly on the decisions Tesco's management made correctly, such as the decision to expand to Asia, and to China in particular. Asia is considered one of the most attractive destinations for Tesco's expansion, and China is its largest market. The markets in Europe already suffer severe competition, and it is extremely hard to obtain an additional share of the market. By contrast, Asian retail markets are not yet completely divided among the world chains, domestic competition in them is relatively weak, and the possibilities for winning a large share are practically unlimited, given correct application of marketing strategies. Besides, the population of Asia is growing much faster than Europe's.

Tesco’s decision to buy into an existing supermarket chain in China is very favourable for the company. As the experience of many companies has shown, this is the only suitable method of entering a new market in Asia.

Tesco's decision to operate large hypermarkets in China, rather than convenience stores, is also correct. For China, hypermarkets may be the best possible option for various reasons. First, there are currently many hypermarkets in the country, and they are attended by large numbers of consumers. Second, hypermarkets offer a larger scale of operations, through which Tesco can obtain larger revenues. Third, Tesco operates quite a few hypermarkets in other Asian countries, and this tactic has proved very efficient. Therefore, it is possible to conclude that this management decision is going to let Tesco establish firm positions in the market.

In its pricing strategy, Tesco has made the correct decision to target consumers with a higher level of income. As the research on the consumer market in China by Chang et al. (2001: 332) shows, the dominant group in the market is currently “trendy, perfectionistic” customers, who know a lot about products, consider quality very important, and are open to new suggestions. Tesco could be very successful if it developed a strategy to target this group of customers.

Tesco's decision to launch Tesco.com in China only in the coming years is correct, because currently not many Chinese consumers use online services. Very few people in China have computers at home, and even though many have them at work, they do not use them for personal needs. It has been predicted that in a few years China will experience an Internet boom, and Tesco's online shopping services could then be very useful and bring the company substantial profits.

  • C. Draw conclusions as to the main points raised in your analysis.

As the analysis has shown, the decisions made have the following disadvantages:

  1. A large number of foreign competitors in the market due to late expansion;
  2. Inability to occupy a large share of the market due to the choice of the relatively small Hymall as a strategic partner;
  3. Possible complications in expanding by opening stores under Tesco's own name;
  4. Inability to attract attention to Tesco and gain the respect of Chinese consumers due to partnership with a company that is not well recognized (Hymall).

The advantages of the decisions which were made by Tesco’s management include the following:

  1. Expansion to China can help Tesco increase its world market share, as China's retailing market has large growth potential;
  2. Conducting marketing research prior to expansion helped the company learn a lot about its future consumers;
  3. The decision to buy into an existing chain is the best method of entering a market in Asia and thus promises large profits;
  4. The decision to operate hypermarkets is well suited to China's consumer market, because customers tend to favour hypermarkets;
  5. Targeting consumers with a high level of income is correct, because income levels in China are currently increasing;
  6. Launching Tesco.com in China in a few years is going to bring Tesco a comparative advantage, because more and more consumers in China are expected to start using online shopping services.

Even though there are many advantages to the decisions made by Tesco's management, some of the mistakes made in the decision-making process could be fatal for the company. Tesco neglected the systems theory of decision-making as well as the Anticipatory Management Decision Process Model in some of its decisions, and thus its expansion to China may not be as successful as the management expected. Nothing can be done anymore about conducting the marketing research in a shorter time, but the company is still capable of searching for a more recognized partner in the market. The company might face certain constraints on these improvements; for example, it might be very difficult to find a new partner in China and obtain additional customers as the result of a stake purchase. However, Tesco will be very successful if it manages to follow all of the submitted recommendations. In order to make better decisions, the management needs to be more considerate when choosing decision-making approaches. Systems theory is the most suitable for the market in which Tesco seeks to succeed, and all of its principles need to be applied by the management.

  • D. Based on the above, offer well-supported recommendations as to how the particular aspect of decision-making could be improved.

In order to improve the process of decision-making in the company and make its expansion to China more successful, the management needed (and still needs) to focus on the following issues:

  1. Conducting marketing research over a shorter period (no more than one year). Tesco still needs to expand to different regions of China, and it needs to make sure that this expansion happens faster than its first entry into China did.
  2. Partnership with a top player in the Chinese market. Tesco needs to find a partner among the top Chinese retailers, for example Shanghai Bailian, which is currently the top player in the market.
  3. Winning a large share of the market not by opening stores under its own name but by signing agreements with well-recognized supermarket retailers in China. Tesco needs to give up the idea of opening stores under its own name for the next five years, until its name becomes well recognized in the country.

The improvements suggested in the recommendations have both strengths and weaknesses. The most important strengths include decision-making based on an understanding of the characteristics of the Chinese consumer market; the application of approaches that other companies have proved successful; and decision-making based on the possibilities of future market development. The major weakness of the recommendations is the difficulty of locating a suitable strategic partner for Tesco in China.

References

  1. Ashley, William C., and Morrison, James L. 1997. Anticipatory Management: Tools for Better Decision Making. The Futurist, Volume 31, Issue 5, September-October 1997.
  2. Beach, L. R. 1990. Image Theory: Decision Making in Personal and Organizational Contexts. Chichester, England: Wiley.
  3. Beach, L. R. 1997. The Psychology of Decision Making: People in Organizations. Thousand Oaks, CA: Sage.
  4. Beach, L. R., and Lipshitz, R. 1993. Why Classical Theory Is an Inappropriate Standard for Evaluating and Aiding Most Human Decision Making. In G. A. Klein, J. Orasanu, R. Calderwood, and C. E. Zsambok (Eds.), Decision Making in Action: Models and Methods (pp. 21-35). Norwood, NJ: Ablex.
  5. Britain’s biggest supermarket Tesco enters China’s mainland. Available at: english.people.com.cn/200407/14/eng20040714_149590.html. Accessed November 11th, 2005.
  6. Chang, Ludwig M.K., Hiu, Alice S.Y., Siu, Noel Y.M., and Wang, Charlie C.L. 2001. An Investigation of Decision-Making Styles of Consumers in China. Journal of Consumer Affairs, Volume 35, Issue 2.
  7. Child, Peter N. 2002. Taking Tesco Global: David Reid, Deputy Chairman of the United Kingdom’s Largest Grocer, Explains the Company’s International Strategy. The McKinsey Quarterly, Issue 2.

Minimally Processed Fruits and Vegetables

Minimal processing is defined to include all unit operations such as washing, sorting, trimming, peeling, slicing, coring, etc. The purpose of minimal processing is to deliver to the consumer a fresh-like product with an extended shelf life, whilst ensuring food safety and maintaining sound nutritional and sensory quality: at least 7 days for domestic consumption and 7-15 days for overseas consumption. Minimally processed products are also called fresh-cut, semi-processed, ready-cut and fresh-processed products.

The increasing popularity of minimally processed fruits and vegetables has been attributed to the health benefits associated with fresh produce, combined with the ongoing consumer trend towards eating out and consuming ready-to-eat foods. The minimal processing industry initially developed to supply hotels, restaurants, catering services and other institutions; more recently it has expanded to include food retailers for home consumption. It is most popular in the USA, where the sales volume in 1998 was about $6 billion.

Consumer trends are changing, and high-quality foods with fresh-like attributes are demanded. Consequently, less extreme treatments and fewer additives are required. Within a wider and modern concept of minimal processing, some food characteristics are identified that must be attained in response to consumer demands: less heat and chilling damage, fresh appearance, and less acid, salt, sugar and fat. To satisfy these demands, some changes in, or reduction of, the traditionally used preservation techniques must be achieved. This is why the topic deserves attention.

Some minimally processed products

Minimally processed fruits and vegetables are more perishable than fresh produce, as a consequence of the tissue damage resulting from processing operations.

Wounding, in fact, leads to an increase in respiration activity and ethylene production rate, alters metabolic activity, reduces shelf life, increases the rate of breakdown of nutritional and sensory attributes, and leads to browning of tissues. The greater the degree of processing, the greater the wounding response. Mechanical damage, in addition, may enhance susceptibility to decay and to contamination by spoilage micro-organisms and by microbes pathogenic to consumers. The impact of bruising and wounding can be reduced by cooling the product before processing.

Strict temperature control after processing is also critical in reducing wound-induced metabolic activity. Other techniques that substantially reduce damage include the use of sharp knives, good maintenance, stringent sanitary conditions, and efficient washing and drying of cut surfaces.

Microbial responses

The increasing demand for these minimally processed products represents a challenge for researchers and processors: making them stable and safe. The increased time and distance between processing and consumption may contribute to a higher risk of food-borne illness.

Although there are chemical and physical hazards specific to minimally processed and ready-to-eat fruits and vegetables, the main concern is microbial contaminants. Some of the microbial pathogens associated with fresh produce include Listeria monocytogenes, Salmonella sp., enteropathogenic strains of Escherichia coli, hepatitis A virus, etc. Intact fruits and vegetables are safe to eat partly because the surface peel is an effective physical and chemical barrier to most organisms. In addition, if the peel is damaged, the acidity of the pulp prevents the growth of most organisms (except acid-tolerant fungi and bacteria). On vegetables, the microflora is dominated by soil organisms. Erwinia and Pseudomonas usually have a competitive advantage over other organisms that could potentially be harmful to humans. Changes in the environmental conditions surrounding a product can result in significant changes in the microflora.

Risk of pathogenic bacteria increases:

• With film packaging (high relative humidity and low oxygen conditions).

• With packaging of products of low salt content and high cellular pH.

• With storage of packaged products at too high a temperature.

Microbial growth on minimally processed products can be controlled by:

• Sanitation of all equipment and use of chlorinated water, which are the standard approaches.

• Low temperature during and after processing generally retards microbial growth.

• Moisture increases microbial growth, so removal of wash or cleaning water by centrifugation or other methods is critical.

• Low pH.

• Low oxygen and elevated carbon dioxide levels, which often retard microbial growth.

Admissions process

This is a provision whereby a prisoner earns a stipulated remission of his sentence each month. For normal prisoners there is a provision of 6 days of remission per month. For prisoners who do some work inside the jail on Sundays, this rises to 7 days. Further, a prisoner who shows leadership in various administrative activities inside the jail earns 8 days of remission per month. The remission earned is accumulated and recorded quarterly in the Remissions Earned record book.

CHECK DATE: On completion of two-thirds of the stipulated sentence, the records of a particular prisoner are reviewed and updated for total remission earned. The cumulative remission earned is then subtracted from the stipulated full sentence and a fresh release date is allocated. The review date at two-thirds of the sentence is called the check date. A separate register is maintained for check dates.

The total jail stay of an undertrial prisoner is adjusted against the total sentence as per the ruling of the court, and a check date is prepared for him accordingly.

RELEASE: At the beginning of the month the check-date register is reviewed and the coming releases for the month are extracted from it. Accordingly, a release list for the next day is prepared and the respective prisoners are informed. On release, the prisoner is provided with a release certificate containing details of the sentence, the duration of the stay, and the amount of work done by the prisoner in the jail.
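The remission arithmetic described above is simple enough to sketch in code. The monthly rates (6, 7, or 8 days) come from the text; the function names, the whole-month counting, and the example dates are hypothetical simplifications.

```python
from datetime import date, timedelta

# Sketch of the remission bookkeeping described above. Monthly rates are
# from the text; everything else (names, dates, whole-month counting) is
# an illustrative simplification.
REMISSION_DAYS = {"normal": 6, "sunday_worker": 7, "leader": 8}

def check_date(admission: date, sentence_days: int) -> date:
    """Review ('check') date at two-thirds of the stipulated sentence."""
    return admission + timedelta(days=round(sentence_days * 2 / 3))

def release_date(admission: date, sentence_days: int,
                 months_served: int, category: str) -> date:
    """Fresh release date after subtracting accumulated remission."""
    earned = REMISSION_DAYS[category] * months_served
    return admission + timedelta(days=sentence_days - earned)

adm = date(2005, 1, 1)
print(check_date(adm, 900))                   # two-thirds review point
print(release_date(adm, 900, 20, "normal"))   # 120 days of remission
```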

MISSION AND OBJECTIVES OF THE INFORMATION SYSTEM

As indicated before, a considerable amount of time and resources is currently devoted to the admissions process, which is conducted entirely manually. The implementation of an information system will serve the objective of facilitating a much smoother and more efficient process. This is true not just of the admissions activity but of an entire end-to-end prisoners' record-keeping system. It would avoid redundancies and multiple or incorrect data entries, and would keep track of the prisoners' activities inside the jail.

It would greatly reduce the number of personnel required to administer the system as compared with the present system. The automation process will also be useful in maintaining up-to-date health records of the prisoners. Various features incorporated in the information system would ensure fast and easy accessibility of data, which would ultimately result in more efficient processes. The organization will have a distinct cost and strategic advantage once the system is implemented.

CONSTRAINTS ON IS DEVELOPMENT

The major constraint on IS development is the appointment and training of personnel. Also, the jail being a central government institution, clearances and funding would be required from the government apparatus to implement the project. There would be considerable infrastructure requirements in terms of computers to be purchased and an IS framework to be laid. The current plan is to appoint an external administrator to supervise the system, but in time in-house personnel could be trained to use and manage it.

EXECUTIVE SUMMARY

This functional specification describes the requirements of Project Jailbird, which aims at implementing an automated Prisoner Management System at Central Jail, Indore. All necessary testing will be conducted in order to satisfactorily prove the software changes against each of the business requirements specified within this functional specification. I have reviewed the functional specification for Project Jailbird and agree that it accurately and completely defines the requirements of this change.

Evolution of Microprocessor

American University, CSIS 550: History of Computing, Professor Tim Bergin
Technology Research Paper: Microprocessors
Beatrice A. Muganda, AU ID: 0719604
May 3, 2001

EVOLUTION OF THE MICROPROCESSOR

INTRODUCTION

The Collegiate Webster dictionary describes a microprocessor as a computer processor contained on an integrated-circuit chip. In the mid-seventies, a microprocessor was defined as a central processing unit (CPU) realized on an LSI (large-scale integration) chip, operating at a clock frequency of 1 to 5 MHz and constituting an 8-bit system (Heffer, 1986).

It was a single component having the ability to perform a wide variety of different functions. Because of their relatively low cost and small size, microprocessors permitted the use of digital computers in many areas where the use of the preceding mainframes—and even minicomputers—would not be practical and affordable (Computer, 1996). Many non-technical people associate microprocessors only with PCs, yet there are thousands of appliances that have a microprocessor embedded in them: telephones, dishwashers, microwaves, clock radios, etc. In these items, the microprocessor acts primarily as a controller and may not be known to the user.

The Breakthrough in Microprocessors

The switching units in the computers of the early 1940s were mechanical relays, devices that opened and closed as they did the calculations. Such mechanical relays had been used in Zuse's machines of the 1930s. Come the 1950s, vacuum tubes took over. The Atanasoff-Berry Computer (ABC) used vacuum tubes as its switching units rather than relays. The switch from mechanical relays to vacuum tubes was an important technological advance, as vacuum tubes could perform calculations considerably faster and more efficiently than relay machines.

However, this technological advance was short-lived, because the tubes could not be made any smaller and could not be placed close to each other, since they generated heat (Freiberger and Swaine, 1984). Then came the transistor, which was acknowledged as a revolutionary development. In “Fire in the Valley”, the authors describe the transistor as a device that was the result of a series of developments in the applications of physics. The transistor changed the computer from a giant electronic brain to a commodity like a TV set.

Credit for this innovation went to three scientists: John Bardeen, Walter Brattain, and William Shockley. The technological breakthrough of the transistor made possible the minicomputers of the 1960s and the personal computer revolution of the 1970s. However, researchers did not stop at transistors. They wanted a device that could perform more complex tasks—a device that could integrate a number of transistors into a more complex circuit. Hence the terminology: integrated circuits, or ICs.

Because they were physically tiny chips of silicon, they also came to be referred to simply as chips. Initially, the demand for ICs came from the military and aerospace industries, which were great users of computers and the only industries that could afford them (Freiberger and Swaine, 1984). Later, Marcian “Ted” Hoff, an engineer at Intel, developed a sophisticated chip that could extract data from its memory and interpret the data as an instruction. The term that evolved to describe such a device was “microprocessor”.

The term “microprocessor” thus first came into use at Intel in 1972 (Noyce, 1981). A microprocessor was nothing more than an extension of the arithmetic and logic IC chips, incorporating more functions into one chip (Freiberger and Swaine, 1984). Today, the term still refers to an LSI single-chip processor capable of carrying out many of the basic operations of a digital computer. In fact, the microprocessors of the late eighties and early nineties are full-scale systems with 32-bit data and 32-bit addresses, operating at clock rates of 25 to 50 MHz (Heffer, 1986).

What led to the development of microprocessors?

As stated above, microprocessors essentially evolved from mechanical relays to integrated circuits. It is important to illustrate here what aspects of the computing industry led to the development of microprocessors.

(1) Digital computer technology. In the History of Computing class, we studied throughout the semester how the computer industry learned to make large, complex digital computers capable of processing more data, and also how to build and use smaller, less expensive computers.

Digital computer technology had been growing steadily since the late 1940s.

(2) Semiconductors. Like digital computer technology, semiconductors had also been developing steadily since the invention of the transistor in the late 1940s. The 1960s saw the integrated circuit grow from just a few transistors to circuits performing many complicated tasks, all on the same chip.

(3) The calculator industry. This industry seemingly grew overnight during the 1970s, from the simplest four-function calculators to very complex programmable scientific and financial machines.

From all this, one idea became obvious—if there was an inexpensive digital computer, there would be no need to keep designing different, specialized integrated circuits. The inexpensive digital computer could simply be reprogrammed to perform whatever was the latest brainstorm, and there would be the new product (Freiberger and Swaine, 1984). The development of microprocessors can be traced to the point in the early 1970s when digital computers and integrated circuits reached the required levels of capability.

However, the early microprocessor did not meet all the goals: it was too expensive for many applications, especially those in the consumer market, and it could not hold enough information to perform many of the tasks being handled by the minicomputers of that time.

How a microprocessor works

According to Krutz (1980), a microprocessor executes a collection of machine instructions that tell the processor what to do. Based on the instructions, a microprocessor does three basic things:

• Using its ALU (Arithmetic/Logic Unit), a microprocessor can perform mathematical operations like addition, subtraction, multiplication and division.

Modern microprocessors contain complete floating-point processors that can perform extremely sophisticated operations on large floating-point numbers.

• A microprocessor can move data from one memory location to another.

• A microprocessor can make decisions and jump to a new set of instructions based on those decisions.

There may be very sophisticated things that a microprocessor does, but those are its three basic activities. Put simply, it fetches instructions from memory, interprets (decodes) them, and then executes whatever functions the instructions direct.

For example, if the microprocessor is capable of 256 different operations, there must be 256 different instruction words. When fetched, each instruction word is interpreted differently than any of the other 255. Each type of microprocessor has a unique instruction set (Short, 1987).

Architecture of a microprocessor

This is about as simple as a microprocessor gets. It has the following characteristics:

• an address bus (8, 16 or 32 bits wide) that sends an address to memory;

• a data bus (8, 16 or 32 bits wide) that can send data to memory or receive data from memory;

• RD (Read) and WR (Write) lines that tell the memory whether it wants to set or get the addressed location;

• a clock line that lets a clock pulse sequence the processor; and

• a reset line that resets the program counter to zero (or whatever) and restarts execution.

A typical microprocessor, therefore, consists of: logical components, which enable it to function as a programmable logic processor; the program counter, stack, and instruction register, which provide for the management of a program; the ALU, which provides for the manipulation of data; and a decoder plus a timing and control unit, which specify and coordinate the operation of the other components.
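A toy simulator makes the fetch-decode-execute cycle described above concrete. The three-instruction machine below is invented purely for illustration and does not correspond to any real instruction set.

```python
# Toy fetch-decode-execute loop. The opcodes (LOAD, ADD, HALT) and the
# two-byte instruction format are invented for illustration only.
memory = [0x01, 7, 0x02, 5, 0x00, 0]   # program: LOAD 7; ADD 5; HALT
acc = 0   # accumulator
pc = 0    # program counter

while True:
    opcode, operand = memory[pc], memory[pc + 1]   # fetch
    pc += 2                                        # advance the counter
    if opcode == 0x01:      # decode + execute: LOAD immediate
        acc = operand
    elif opcode == 0x02:    # ADD immediate
        acc += operand
    else:                   # HALT (0x00)
        break

print(acc)   # -> 12
```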

The connection of the microprocessor to other units—memory and I/O devices—is done with the address, data, and control buses.

Generations of microprocessors

Microprocessors can be categorized into five generations: first, second, third, fourth, and fifth. Their characteristics are described below.

First generation

The microprocessors introduced in 1971 to 1972 are referred to as first-generation systems. First-generation microprocessors processed their instructions serially—they fetched the instruction, decoded it, then executed it.

When an instruction was completed, the microprocessor updated the instruction pointer and fetched the next instruction, performing this sequential drill for each instruction in turn.

Second generation

By the late 1970s (specifically 1973), enough transistors were available on the IC to usher in the second generation of microprocessor sophistication: 16-bit arithmetic and pipelined instruction processing. Motorola's MC68000 microprocessor, introduced in 1979, is an example. Another example is Intel's 8080. This generation is defined by overlapped fetch, decode, and execute steps (Computer, 1996).

As the first instruction is processed in the execution unit, the second instruction is decoded and the third instruction is fetched. The distinction between first- and second-generation devices was primarily the use of newer semiconductor technology to fabricate the chips. This new technology resulted in a five-fold increase in instruction execution speed and higher chip densities.

Third generation

The third generation, introduced in 1978, was represented by Intel's 8086 and the Zilog Z8000, which were 16-bit processors with minicomputer-like performance.

The third generation came about as IC transistor counts approached 250,000. Motorola's MC68020, for example, incorporated an on-chip cache for the first time, and the depth of the pipeline increased to five or more stages. This generation of microprocessors was different from the previous ones in that all major workstation manufacturers began developing their own RISC-based microprocessor architectures (Computer, 1996).

Fourth generation

As the workstation companies converted from commercial microprocessors to in-house designs, microprocessors entered their fourth generation, with designs surpassing a million transistors.

Leading-edge microprocessors such as Intel's 80960CA and Motorola's 88100 could issue and retire more than one instruction per clock cycle (Computer, 1996).

Fifth generation

Fifth-generation microprocessors employed decoupled superscalar processing, and their designs soon surpassed 10 million transistors. In this generation, PCs are a low-margin, high-volume business dominated by a single microprocessor (Computer, 1996).

Companies associated with microprocessors

Overall, Intel Corporation has dominated the microprocessor area, even though other companies such as Texas Instruments and Motorola also introduced microprocessors. Listed below are the microprocessors each company created.

(A) Intel

As indicated previously, Intel Corporation dominated microprocessor technology and is generally acknowledged as the company that successfully introduced the microprocessor into the market. Its first microprocessor was the 4004, in 1971. The 4004 took the integrated circuit one step further by locating all the components of a computer (CPU, memory, and input and output controls) on a minuscule chip. It evolved from a development effort for a calculator chip set. Previously, an IC had to be manufactured to fit a special purpose; now one microprocessor could be manufactured and then programmed to meet any number of demands. The 4004 was the central component of a four-chip set, called the 4004 Family: the 4001, a 2,048-bit ROM; the 4002, a 320-bit RAM; and the 4003, a 10-bit I/O shift register. The 4004 had 46 instructions, using only 2,300 transistors in a 16-pin DIP.

It ran at a clock rate of 740 kHz (eight clock cycles per CPU cycle of 10.8 microseconds)—the original goal was 1 MHz, to allow it to compute BCD arithmetic as fast (per digit) as a 1960s-era IBM 1620 (Computer, 1996). Following in 1972 was the 4040, an enhanced version of the 4004 with an additional 14 instructions, 8K program space, and interrupt abilities (including shadows of the first 8 registers). In the same year, the 8008 was introduced. It had a 14-bit PC. The 8008 was intended as a terminal controller and was quite similar to the 4040.

The 8008 increased the 4004's word length from four to eight bits and doubled the volume of information that could be processed (Heath, 1991). In April 1974, the 8080, the successor to the 8008, was introduced. It was the first device with the speed and power to make the microprocessor an important tool for the designer. It quickly became accepted as the standard 8-bit machine. It was the first Intel microprocessor announced before it was actually available; it represented such an improvement over existing designs that the company wanted to give customers adequate lead time to design the part into new products.

The use of the 8080 in personal computers and small business computers was initiated in 1975 by MITS's Altair microcomputer. A kit selling for $395 enabled many individuals to have computers in their own homes (Computer, 1996). Following closely, in 1976, was the 8048, the first 8-bit single-chip microcomputer. It was designed as a microcontroller rather than a microprocessor—low cost and small size were the main goals. For this reason, data was stored on-chip, while program code was external. The 8048 was eventually replaced by the very popular but bizarre 8051 and 8052 (available with on-chip program ROMs).

While the 8048 used 1-byte instructions, the 8051 had a more flexible 2-byte instruction set, with eight 8-bit registers plus an accumulator A. Data space was 128 bytes and could be accessed directly or indirectly by a register, plus another 128 bytes above that in the 8052, which could only be accessed indirectly (usually for a stack) (Computer, 1996). In 1978, Intel introduced its high-performance, 16-bit MOS processor—the 8086. This microprocessor offered power, speed, and features far beyond the second-generation machines of the mid-70s. It is said that the personal computer revolution did not really start until the 8088 processor was created.

This chip became the most ubiquitous in the computer industry when IBM chose it for its first PC (Freiberger and Swaine, 1984). In 1982, the 80286 (also known as the 286) came next; it was the first Intel processor that could run all the software written for its predecessor, the 8088. Many novices were introduced to desktop computing with a “286 machine”, and it became the dominant chip of its time. It contained 130,000 transistors. In 1985, the first multitasking chip, the 386 (80386), was created. This multitasking ability allowed Windows to do more than one function at a time.

This 32-bit microprocessor was designed for applications requiring high CPU performance. In addition to providing access to the 32-bit world, the 80386 addressed two other important issues: it provided system-level support to systems designers, and it was object-code compatible with the entire family of 8086 microprocessors (Computer, 1996). The 80386 was made up of six functional units: (i) execution unit, (ii) segment unit, (iii) page unit, (iv) decode unit, (v) bus unit, and (vi) prefetch unit. It had 34 registers, divided into such categories as general-purpose registers, debug registers, and test registers.

The 80386 had 275,000 transistors (Noyce, 1981). The 486 (80486) generation of chips really advanced the point-and-click revolution. It was also the first chip to offer a built-in math coprocessor, which gave the central processor the ability to do complex math calculations. The 486 had more than a million transistors. In 1993, when Intel lost a bid to trademark the 586 in order to protect its brand from being copied by other companies, it coined the name Pentium for its next generation of chips, and there began the Pentium series—Pentium Classic, Pentium II, III and, currently, 4.

(B) Motorola

The MC68000 was the first 32-bit microprocessor introduced by Motorola, in the early 1980s. It was followed by higher levels of functionality on the microprocessor chip in the MC68000 series: the MC68020, introduced later, had three times as many transistors, was about three times as big, and was significantly faster. The Motorola 68000 was one of the second-generation systems, developed in 1973, and was known for its graphics capabilities. The Motorola 88000 (originally named the 78000) is a 32-bit processor, one of the first load-store CPUs based on a Harvard architecture (Noyce, 1981).

(C) Digital Equipment Corporation (DEC)

In March 1974, Digital Equipment Corporation (DEC) announced it would offer a series of microprocessor modules built around the Intel 8008.

(D) Texas Instruments (TI)

A precursor to these microprocessors was the 16-bit Texas Instruments 1900 microprocessor, introduced in 1976. The Texas Instruments TMS370 is similar to the 8051, another of TI’s creations; the only difference between the two was the addition of a B accumulator and some 16-bit support.

Microprocessors Today

Technology has been changing at a rapid pace. Every day a new product is made to make life a little easier. The computer plays a major role in the lives of most people; it allows a person to do practically anything. The Internet enables the user to gain knowledge at a much faster pace than researching through books. The portion of the computer that allows it to do more work than a simple computer is the microprocessor. The microprocessor has brought electronics into a new era and caused component manufacturers and end-users to rethink the role of the computer.

What was once a giant machine attended by specialists in a room of its own is now a tiny device conveniently transparent to users of automobiles, games, instruments, office equipment, and a large array of other products. From their humble beginnings 25 years ago, microprocessors have proliferated into an astounding range of chips, powering devices ranging from telephones to supercomputers (PC Magazine, 1996). Today, microprocessors for personal computers get widespread attention, and they have enabled Intel to become the world’s largest semiconductor maker.

In addition, embedded microprocessors are at the heart of a diverse range of devices that have become staples of affluent consumers worldwide. The impact of the microprocessor, however, goes far deeper than new and improved products. It is altering the structure of our society by changing how we gather and use information, how we communicate with one another, and how and where we work. Computer users want fast memory in their PCs, but most do not want to pay a premium for it.

Manufacturing of microprocessors

Economical manufacturing of microprocessors requires mass production.

Microprocessors are constructed by depositing and removing thin layers of conducting, insulating, and semiconducting materials in hundreds of separate steps. Nearly every layer must be patterned accurately into the shape of transistors and other electronic elements. Usually this is done by photolithography, which projects the pattern of the electronic circuit onto a coating that changes when exposed to light. Because these patterns are smaller than the shortest wavelength of visible light, short-wavelength ultraviolet radiation must be used. Microprocessor features are so small and precise that a single speck of dust can destroy the microprocessor, so microprocessors are made in filtered clean rooms where the air may be a million times cleaner than in a typical home (PC World, 2000).

Performance of microprocessors

The number of transistors available has a huge effect on the performance of a processor. As seen earlier, a typical instruction in a processor like the 8088 took 15 clock cycles to execute. Because of the design of the multiplier, it took approximately 80 cycles just to do one 16-bit multiplication on the 8088.

With more transistors, much more powerful multipliers capable of single-cycle speeds become possible. More transistors also allow a technology called pipelining. In a pipelined architecture, instruction execution overlaps: even though it might take 5 clock cycles to execute each instruction, there can be 5 instructions in various stages of execution simultaneously, so it looks as if one instruction completes every clock cycle (PC World, 2001). Many modern processors have multiple instruction decoders, each with its own pipeline.

This allows multiple instruction streams, which means more than one instruction can complete during each clock cycle. This technique can be quite complex to implement, so it takes lots of transistors. The trend in processor design has been toward full 32-bit ALUs with fast floating-point processors built in and pipelined execution with multiple instruction streams. There has also been a tendency toward special instructions (like the MMX instructions) that make certain operations particularly efficient, as well as the addition of hardware virtual-memory support and L1 caching on the processor chip.
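A minimal sketch of the arithmetic behind the pipelining claim (the instruction count and stage depth are illustrative, not from the text):

```python
# Minimal sketch (illustrative numbers) of why pipelining raises throughput:
# a strictly sequential CPU finishes one instruction every
# `cycles_per_instruction` cycles, while an ideal pipeline finishes one
# instruction per cycle once its stages are full.

def sequential_cycles(n_instructions, cycles_per_instruction=5):
    # Each instruction must fully complete before the next one starts.
    return n_instructions * cycles_per_instruction

def pipelined_cycles(n_instructions, stages=5):
    # `stages` cycles to fill the pipeline, then one completion per cycle.
    return stages + (n_instructions - 1)

n = 1_000
print(sequential_cycles(n))  # 5000 cycles
print(pipelined_cycles(n))   # 1004 cycles: roughly one instruction per cycle
```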

All of these trends push up the transistor count, leading to the multi-million-transistor powerhouses available today. These processors can execute about one billion instructions per second! (PC World, 2000)

With all the different types of Pentium microprocessors, what is the difference? Three basic characteristics stand out:

• Instruction set: the set of instructions that the microprocessor can execute.
• Bandwidth: the number of bits processed in a single instruction.
• Clock speed: given in megahertz (MHz), the clock speed determines how many instructions per second the processor can execute.
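As a rough sketch of how these characteristics combine (the 4.77-MHz clock for an 8088-class machine is an assumption added here for illustration; the 15-cycle figure comes from the passage above):

```python
# Rough sketch: instruction throughput = clock rate / cycles per instruction.

def instructions_per_second(clock_hz, cycles_per_instruction):
    return clock_hz / cycles_per_instruction

print(instructions_per_second(4.77e6, 15))  # ~318,000 instructions/s
print(instructions_per_second(1e9, 1))      # ~1e9/s: the "one billion
                                            # instructions per second" above
```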

In addition to bandwidth and clock speed, microprocessors are classified as being either RISC (reduced instruction set computer) or CISC (complex instruction set computer).

Other uses of microprocessors

There are many uses for microprocessors in the world today. Most appliances found around the house are operated by microprocessors. Most modern factories are fully automated, meaning that most jobs are done by a computer. Automobiles, trains, subways, planes, and even taxi services require the use of many microprocessors. In short, there are microprocessors everywhere you go. Another common place to find microprocessors is a car.

This is especially applicable to sports cars. There are numerous uses for a microprocessor in cars. First of all, it controls the warning LED signs: whenever there is a problem, low oil, for example, sensors tell it that the oil is below a certain level, and it starts blinking the LED until the problem is fixed. Another use is in the suspension system, where a processor controls the amount of pressure applied to keep the car level. During turns, a processor slows down the wheels on the inside of the curve and speeds them up on the outside to keep the speed constant and make a smooth turn.

An interesting story that appeared in the New York Times dated April 16 shows that there is no limit to what microprocessors can do and that researchers and scientists are not stopping at the current uses of microprocessors. The next time the milk is low in the refrigerator, the grocery store may deliver a new gallon before it is entirely gone. Masahiro Sone, who lives in Raleigh, N.C., has won a patent for a refrigerator with an inventory processing system that keeps track of what is inside and what is about to run out, and can ring up the grocery store to order more (NY Times, 2001).

Where is the industry of microprocessors going?

Almost immediately after their introduction, microprocessors became the heart of the personal computer. Since then, the improvements have come at an amazing pace. The 4004 ran at 108 kHz (that’s kilohertz, not megahertz) and processed only 4 bits of data at a time. Today’s microprocessors and the computers that run on them are thousands of times faster. Effectively, they’ve come pretty close to fulfilling Moore’s Law (named after Intel cofounder Gordon Moore), which states that the number of transistors on a chip will double every 18 months or so.
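A small sketch of the doubling rule just quoted, parameterized so both versions cited in this article (18 months here, two years in the closing section) can be tried; the 4004's count of 2,300 transistors and the original Pentium's ~3.1 million are commonly cited figures added here for illustration:

```python
# Moore's Law as an exponential doubling rule.

def transistors(initial, years, doubling_period_years):
    return initial * 2 ** (years / doubling_period_years)

# From the 4004 (1971) to the original Pentium's release (1993): 22 years.
print(round(transistors(2300, 22, 1.5)))  # ~60 million with 18-month doubling
print(round(transistors(2300, 22, 2.0)))  # ~4.7 million with 2-year doubling,
                                          # close to the Pentium's ~3.1 million
```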

Performance has increased at nearly the same rate (PC Magazine, 1998). Can the pace continue? Well, nothing can increase forever. But according to Gerry Parker, Intel’s executive vice president in charge of manufacturing, “we are far from the end of the line in terms of microprocessor performance. In fact, we’re constantly seeing new advances in technology, one example being new forms of lithography that let designers position electronic components closer and closer together on their chips. Processors are created now using a 0.35-micron process.

But next year we’ll see processors created at 0.25 microns, with 0.18 and 0.13 microns to be introduced in the years to come” (PC Magazine, 1998). However, it’s not just improvements in lithography and density that can boost performance. Designers can create microprocessors with more layers of metal tying together the transistors and other circuit elements. The more layers, the more compact the design. But these ultracompact microprocessors are also harder to manufacture and validate. New chip designs take up less space, resulting in more chips per wafer.

The original Pentium (60/66 MHz) was 294 square millimeters; then it was 164 square millimeters (75/90/100 MHz); and now it’s 91 square millimeters (133- to 200-MHz versions) (PC Magazine, 1998). When will all this end? Interestingly, it may not be the natural limits of technology that eventually refute Moore’s Law. Instead, it’s more likely to be the cost of each successive generation. Every new level of advancement costs more, as microprocessor development is a hugely capital-intensive business. Currently, a fabrication plant with the capacity to create about 40,000 wafers a month costs some $2 billion.

And the rapid pace of innovation means equipment can become obsolete in just a few years. Still, there are ways of cutting some costs, such as converting from today’s 8-inch silicon wafers to larger, 300-mm (roughly 12-inch) wafers, which can produce 2.3 times as many chips per wafer as those now in use. Moving to 300-mm wafers will cost Intel about $500 million in initial capital. Still, nothing lasts forever. As Parker notes, “the PC industry is built on the assumption that we can get more and more out of the PC with each generation, keep costs in check, and continue adding more value.
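A quick check of the 2.3x claim (assuming, to first order, that chips per wafer scale with wafer area and treating an 8-inch wafer as 200 mm):

```python
# Chips per wafer scale roughly with usable wafer area.
import math

def wafer_area(diameter_mm):
    return math.pi * (diameter_mm / 2) ** 2

ratio = wafer_area(300) / wafer_area(200)
print(round(ratio, 2))  # 2.25 -- close to the 2.3x figure quoted above
```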

We will run out of money before we run out of technology. When we can’t hold costs down anymore, then it will be a different business” (PC Magazine, 1998). At the beginning of last year, the buzz was about the PlayStation 2 and the Emotion Engine processor that would run it. Developed by Sony and Toshiba, the high-tech processor, experts predicted, would offer unprecedented gaming power and, more importantly, could provide the processing power for the PlayStation 2 to challenge cheap PCs as the entry-level device of choice for home access to the Web.

The PlayStation 2 is equipped with the 295-MHz MIPS-based Emotion Engine, Sony’s own CPU designed with Toshiba Corp., a 147-MHz graphics processor that renders 75 million pixels per second, a DVD player, an IEEE 1394 serial connection, and two USB ports. Sony uses DVD discs for game titles and gives consumers the option of using the product for gaming, DVD movie playing, and eventually Web surfing (PC World, 2000). Soon, instead of catching up on the news via radio or a newspaper on the way to work, commuters may be watching it on a handheld computer or cell phone.

In early January this year, Toshiba America Electronic Components announced its TC35273XB chip. The chip has 12 Mb of integrated memory and an encoder and decoder for MPEG-4, an audio-video compression standard. According to Toshiba, the integrated memory is what sets this chip apart from others: with integrated memory, the chip consumes less power, making it a good fit for portable gadgets. The chip is designed specifically to address battery life, which can be very short with portable devices.

The chip will have a RISC processor at its core, running at a clock speed of 70 MHz (PC World, 2000). Toshiba anticipates that samples of this chip will be released to manufacturers in the second quarter, with mass production to follow in the third quarter. Shortly after this release, new handheld computers and cell phones using the chip and offering streaming media are expected (CNET News). It was reported in CNET News that in February this year, IBM started a program to use the Internet to speed custom-chip design, bolstering its unit that makes semiconductors for other companies.

IBM, one of the biggest makers of application-specific chips, would set up a system in which chip designs are placed in a secure environment on the Web, where a customer’s design team and IBM engineers collaborate on the blueprints and make changes in real time. Designing custom chips, which are used to provide unique features that standard processors don’t offer, requires time-consuming exchanges of details between the clients, who provide a basic framework, and the IBM employees who do the back-end work. Using the Internet will speed the process and make plans more accurate.

IBM reasons that, since its customers ask for better turnaround time and better customer satisfaction, this would be one way to deliver both. As a pilot program, the service was to be offered to a set of selected customers initially, and then extended to customers who design so-called system-on-a-chip devices that combine several functions on one chip (CNET News). A new microprocessor unveiled in February 2000 by Japan’s NEC offers high-capacity performance while consuming only small amounts of power, making it ideal for use in mobile devices.

This prototype could serve as the model for future mobile processors. The MP98 processor contains four microprocessors on the same chip that work together in such a way that they can be switched on and off depending on the job at hand. For example, a single processor can be used to handle easy jobs, such as data entry through a keypad, while more can be brought online as the task demands, with all four working on tasks such as processing video. This gives designers of portable devices the best of both worlds: low power consumption and high capacity (PC World, 2000).

However, it should be noted that the idea of putting several processors together on a single chip is not new, as both IBM and Sun Microsystems have developed similar devices. The difference is that the MP98 is the first working example of a “fine grain” device that offers better performance. Commercial products based on this technology are likely to be seen around 2003 (PC World, 2000). PC World also reported that, last September, a Japanese dentist received U.S. and Japanese patents for a method of planting a microchip into a false tooth.

The one-chip microprocessor embedded in a plate denture can be detected using a radio transmitter-receiver, allowing its owner to be identified. This is useful in senior citizens’ homes, where all dentures are usually collected from their owners after meals, washed together, and returned; in such cases it is important to return every denture to its correct owner without any mistake (PC World, 2000). In March this year, Advanced Micro Devices (AMD) launched its 1.3-GHz Athlon processor. Tests on this processor indicated that its speed surpassed Intel’s 1. GHz Pentium 4. The Athlon processor has a 266-MHz front-side bus that works with systems using 266-MHz memory. The price starts from $2,988 (PC World, 2001). Intel’s Pentium 4, which was launched in late 2000, is designed to provide blazing speed, especially in handling multimedia content. Dubbed the Intel NetBurst micro-architecture, it is designed to speed up applications that send data in bursts, such as streaming media, MP3 playback, and video compression. Even before the dust had settled on NetBurst, Intel released its much-awaited 1. GHz Pentium 4 processor on Monday, April 23, said to be the company’s highest-performance microprocessor for desktops, currently priced at $325 in 1,000-unit quantities. The vice president and general manager of Intel was quoted as saying, “the Pentium 4 processor is destined to become the center of the digital world. Whether encoding video and MP3 files, doing financial analysis, or experiencing the latest Internet technologies, the Pentium 4 processor is designed to meet the needs of all users” (PC World, 2001).

Over thirty years ago, Gordon Moore, co-founder of Intel, observed that the number of transistors that can be placed on a silicon chip would double every two years. Intel maintains that this has remained true since the release of its first processor, the 4004, in 1971. The competition between Intel and AMD to produce the fastest and smallest processor continues; in fact, Intel Corp. predicts that PC chips will climb to more than 10 GHz from today’s 1-GHz standard by the year 2011. However, researchers are paying increasing attention to software.

That’s because new generations of software, especially computing-intensive user interfaces, will call for processors with expanded capabilities and performance.


Design of Single Mode TE Mode Optical Polarizers

Design of a Single-Mode TE-Pass Optical Polarizer Using a Silicon Oxynitride Multilayered Waveguide

Abstract: A silicon oxynitride (SiON) guided film is used as a multilayered waveguide and, using the transfer matrix method, we propose the application of the waveguide as a TE-pass polarizer and a TM-pass polarizer having a passband in the third optical communication window of 1550 nm. A polarizer is a key component for devices which require a single polarization for their operation. Most polarizers use metal-clad waveguides with a proper thickness and refractive index of cover and substrate.

Index Terms— Optical polarizer, multilayered waveguide, TE mode, silicon oxynitride

Introduction

Optical waveguide: An optical waveguide is a physical structure that guides electromagnetic waves in the optical spectrum. Common types of optical waveguides include optical fibers and rectangular waveguides.

To fabricate a planar waveguide (Fig. 1), usually a film (refractive index nf), with a cover layer (refractive index nc), is grown on a substrate (refractive index ns) such that nf > ns > nc. Such waveguides are known as asymmetric waveguides. For a symmetric waveguide, the cover and substrate are fabricated from the same material and the refractive indices are equal, i.e. nc = ns.

If there is more than one layer between cover and substrate, then such optical waveguides are known as multilayer waveguides.

In a multilayered waveguide, we are free to fabricate as many layers as required; we can select the thickness of each layer and the type of material according to our needs.
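As an aside, a standard slab-waveguide fact (stated here for orientation; it is not spelled out in this text): a guided mode's effective index must lie between the largest cladding index and the largest layer index,

\[
\max(n_s, n_c) \;<\; n_{\mathrm{eff}} = \frac{\beta}{k_0} \;<\; \max_i n_i .
\]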

Fig. 1 Geometry of the 3-layer waveguide structure

For an N-layer structure, the Define frame receives the vacuum wavelength, the refractive index values ns (substrate), n1, …, nN (inner layers 1 to N), nc (cover), and the thicknesses t1, …, tN of the inner layers. All dimensions are given in microns. The figure illustrates the relevant geometry:

Fig. 2 Geometry of the multilayer waveguide structure

Multilayer waveguides are used in the implementation of a variety of optical devices, including semiconductor lasers, modulators, waveguide polarizers, Bragg reflectors, and directional couplers.

During the last 20 years, many attempts have been made to solve the wave equation for the propagating modes in a general, lossless or lossy multilayer waveguide, in such a way as to ease the design and optimization of the above optical devices.

TE-Pass Polarizer

A silicon oxynitride (SiON) planar waveguide structure can be fabricated using plasma-enhanced chemical vapor deposition (PECVD). In this technique the oxidation reaction is initiated by plasma rather than by an external heating source. Other techniques include the melting technique and the vapour-phase deposition technique, but the CVD technique is superior. These waveguides find various applications in optical communication, especially as wavelength filters, microresonators, modulators, polarization splitters, and second-harmonic generators.

A SiON guided film is used as a multilayered waveguide and, using the transfer matrix method, we propose the application of the waveguide as a TE-pass polarizer and a TM-pass polarizer having a passband in the third optical communication window of 1550 nm. A polarizer is a key component for devices which require a single polarization for their operation. Most polarizers use metal-clad waveguides with a proper thickness and refractive index of cover and substrate.


We propose a multilayered SiON waveguide fabricated on a substrate with metal as the cover, as shown in Fig. 3. The choice of SiON is made for its highly desirable characteristics, such as low insertion loss, a wide range of refractive index tailoring, and the realization of compact devices because of its low bending loss. The present configuration of the optical polarizer will find applications in integrated optical circuits, signal processing from fiber-optic sensors, and fiber gyroscopes. For the analysis of the waveguide we have used the transfer matrix formulation.

Fig. 3 Geometry of the multilayer waveguide structure

nc = refractive index of the cover
ni = refractive index of the i-th film layer, i = 1, 2, …, r
ns = refractive index of the substrate
ti = thickness of the i-th film layer in micrometers

Formulation

For the computation of the propagation constant and the resulting mode profile of a multilayered waveguide, the following methods are available:

1. Perturbation method (4-layer)

2. Newton’s method

3. Mode-matching method (5-layer structure)

4. Transfer matrix formulation

5. Argument principle method

The perturbation method was used for a lossless 5-layer structure, for a lossy 4-layer structure, and for a metal-clad waveguide to determine the propagation constants and the resulting propagating mode profiles. Newton’s method was used for metal-clad waveguides, where the derivative of the dispersion equation can be obtained analytically. A graphical method, as well as formal electromagnetic analysis methods such as the mode-matching method, was also used. Neither the perturbation method nor Newton’s method can easily be extended to multilayer structures, since their approach is analytic and the formulas involved become cumbersome.

None of the above methods can easily predict the number of propagating modes supported by the multilayer structure. This is a serious problem, since there is no way of knowing when to stop searching for new propagating modes, or even whether the waveguide can support any mode at all. In fact, an additional analysis must be used to determine the number of guided modes before applying the zero-searching techniques. Even if the number of existing propagating guided modes is given, there is no confirmation that all the modes will be found. All the above-mentioned methods have serious problems in locating closely spaced roots. Furthermore, all of them need an initial estimate close to the actual zero. This initial estimate may be difficult to find, especially for high-loss propagating modes, where the popular perturbation method does not apply. The method which we are using is based on complex-number theory. It is capable of finding the zeros or poles of any analytic function in the complex plane. The dispersion equation of a general multilayer waveguide is formed via the concept of thin-film transfer-matrix theory. After its singularity points are observed, the complex plane is divided into regions where the dispersion equation is analytic, and all the zeros inside each region are found. In addition, the method provides the number of zeros or poles in each region. The transfer-matrix analysis provides an easy formulation of the multilayer structure problem. The method will be presented for TE modes, but the extension to TM modes is straightforward.
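To make the procedure concrete, here is a minimal numerical sketch of the TE transfer-matrix dispersion function described above. It is a generic textbook formulation, not the paper's exact code: it tracks (E_y, dE_y/dx) rather than (E_y, H_z) (the two differ only by the constant factor j/ωµ0, so the zeros coincide), and the layer stack, indices, and thicknesses below are placeholder values.

```python
import numpy as np

def te_dispersion(beta, k0, n_cover, n_layers, t_layers, n_sub):
    """F(beta) whose zeros are the guided TE modes. Layers are listed
    from the substrate side to the cover side; thicknesses in microns."""
    M = np.eye(2, dtype=complex)
    for n_i, t_i in zip(n_layers, t_layers):
        kappa = np.sqrt(k0**2 * n_i**2 - beta**2 + 0j)
        layer = np.array([[np.cos(kappa * t_i), np.sin(kappa * t_i) / kappa],
                          [-kappa * np.sin(kappa * t_i), np.cos(kappa * t_i)]])
        M = layer @ M
    # Transverse decay constants in the semi-infinite substrate and cover.
    g_s = np.sqrt(beta**2 - k0**2 * n_sub**2 + 0j)
    g_c = np.sqrt(beta**2 - k0**2 * n_cover**2 + 0j)
    # Guided-mode condition: the field decays on both sides.
    return M[1, 0] + g_s * M[1, 1] + g_c * M[0, 0] + g_s * g_c * M[0, 1]

k0 = 2 * np.pi / 1.55                    # third window, 1550 nm (microns)
n_eff = np.linspace(1.46, 1.51, 2000)    # scan between substrate and film
F = [te_dispersion(n * k0, k0, 1.0, [1.52], [1.5], 1.45) for n in n_eff]
print(n_eff[int(np.argmin(np.abs(F)))])  # rough TE0 effective index
# A metal cover (complex n_cover) makes F complex; its zeros then require a
# complex-plane root search such as the argument principle method above.
```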

Fig. 4 TE-Pass Polarizer


2.1 TE Mode

A multilayer nonmagnetic slab waveguide structure (µ = µ0) is shown in Fig. 3. The refractive index ni of the i-th layer can be complex in general, i.e. ni = ni′ − j ki, where ki is the extinction coefficient of the i-th layer, i = 1, …, r, and r is the number of layers. For a TE mode propagating in the +z direction in the i-th layer (xi ≤ x ≤ xi+1), the electric field is E = ŷ Ey(x) exp[j(ωt − γz)], and the magnetic field in the same layer is H = [x̂ Hx(x) + ẑ Hz(x)] exp[j(ωt − γz)], where x̂, ŷ, ẑ are the unit vectors in the x, y, z directions, respectively, ω is the angular frequency, and γ = β − jα is the complex propagation constant, with β and α the phase and attenuation constants, respectively.

Using Maxwell’s curl equations, for a TE mode Ex = Ez = 0, and only the Ey, Hx, and Hz components are present. Solving the two curl equations for these components gives

Hx = −(γ / ωµ0) Ey ( 1 )

Hz = (j / ωµ0) dEy/dx ( 2 )

−jγ Hx − dHz/dx = jωε0 ni² Ey ( 3 )

which combine into the wave equation

d²Ey/dx² + (k0² ni² − γ²) Ey = 0 ( 6 )

where ε0 is the free-space permittivity, k0 = ω/c = 2π/λ0, c is the velocity of light in free space, and λ0 is the free-space wavelength. The tangential electric and magnetic fields within the i-th layer are solutions of equation (6) and can be written as

Ey(x) = Ai cos[κi (x − xi)] + Bi sin[κi (x − xi)] ( 7a )

Hz(x) = (j κi / ωµ0) { −Ai sin[κi (x − xi)] + Bi cos[κi (x − xi)] } ( 7b )

where κi = (k0² ni² − γ²)^1/2 is the transverse wavenumber of the i-th layer. ( 8 )

Applying the boundary conditions at x = xi+1 in equations (7a) and (7b), we get

Ey(xi+1) = cos(κi ti) Ey(xi) − (j ωµ0 / κi) sin(κi ti) Hz(xi) ( 10a )

Hz(xi+1) = −(j κi / ωµ0) sin(κi ti) Ey(xi) + cos(κi ti) Hz(xi) ( 10b )

Combining equations (10a) and (10b) in matrix form,

[ Ey(xi+1) ; Hz(xi+1) ] = Mi [ Ey(xi) ; Hz(xi) ] ( 11 )

Using the continuity of the tangential fields at any layer interface in the multilayer structure, the fields tangential to the boundaries at the top of the substrate layer, (Es, Hs), and at the bottom of the cover layer, (Ec, Hc), are related via the matrix product

[ Ec ; Hc ] = Mr Mr−1 … M1 [ Es ; Hs ] ( 12 )

where

Mi = [ cos(κi ti)    −(j ωµ0 / κi) sin(κi ti) ; −(j κi / ωµ0) sin(κi ti)    cos(κi ti) ]    for i = 1, 2, …, r ( 13 )

are the transfer matrices for all of the r layers having thicknesses ti. For propagating modes, the tangential fields at the boundaries must be exponentially decaying, having the form

Ey(x) = Es exp[ γs (x − x1) ] for x ≤ x1 ( 14 )

and

Ey(x) = Ec exp[ −γc (x − xr+1) ] for x ≥ xr+1 ( 15 )

where γs = (γ² − k0² ns²)^1/2 and γc = (γ² − k0² nc²)^1/2 are the transverse decay constants in the substrate and cover.

Substituting equations (14) and (15) into equation (12) yields the dispersion equation F(γ) = 0 of the multilayer waveguide; its complex zeros are the propagating modes.

The polarization extinction ratio (PER) is defined as the ratio of the power remaining (at the output end) in the desired TE mode (P_TE) to the power remaining (at the output end) in the TM mode (P_TM), expressed in decibels. In addition, the polarizer insertion loss (PIL) is defined as the power loss associated with the desired TE mode. Thus:

PER = 10 log10 ( P_TE / P_TM )

PER = TM loss in dB − TE loss in dB

PIL = −10 log10 ( P_TE )

PIL = TE loss in dB

The above equations assume that the input TE mode has unit power at the input end of the polarizer. For a good TE-pass polarizer, we require the power remaining in the desired TE mode at the output end of the polarizer to be as high as possible; hence a low value of PIL is desirable. The effectiveness of the polarizer in discriminating against the propagation of the TM mode relative to the TE mode is measured by the PER parameter, so this parameter should be as high as possible. Hence, we require a high PER and simultaneously a low PIL.
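As a quick worked example (the 1-cm device length is an assumption; the per-centimeter losses are taken from the conclusion below):

```python
# Worked example: TE loss ~2.5 dB/cm (worst case) and TM loss ~40 dB/cm
# (best case) from the conclusion, over an assumed 1-cm device length.
length_cm = 1.0
te_loss_db = 2.5 * length_cm   # PIL = TE loss in dB
tm_loss_db = 40.0 * length_cm
per_db = tm_loss_db - te_loss_db
print(f"PIL = {te_loss_db:.1f} dB, PER = {per_db:.1f} dB")
# -> PIL = 2.5 dB, PER = 37.5 dB: low insertion loss, high extinction.
```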

Fig. 5.1 Effective index w.r.t. normalized film layer. Fig. 5.2 Loss w.r.t. normalized film layer.

Fig. 5.3 Effective index w.r.t. normalized film layer. Fig. 5.4 Loss w.r.t. normalized film layer.

Conclusion

First of all, we have checked the operation of the TE mode by using the transfer matrix method [2]. Values of the phase constant and attenuation constant for a 6-layer lossy dielectric waveguide are available; the available data were calculated by the argument principle method (APM).

The transfer matrix method has been used to analyze a four-layered waveguide consisting of SiON as the guiding film. On this basis, we have designed a TE-pass polarizer. The range of SiON film thickness was estimated so that only the fundamental TE0 mode is supported. The calculations showed that in the thickness range of 0.7 µm - 2.2 µm of SiON, the waveguide supports only the TE0 mode.

In the TE-pass polarizer, the loss of the TE mode is in the range of 0.2 - 2.5 dB/cm, while for the TM mode the range is 40 - 45 dB/cm, which is quite high in comparison to the TE mode. So in this configuration of the four-layer waveguide, only the TE mode will pass.

References

[1] Vishnu Priye, Bishnu P. Pal, and K. Thyagarajan, “Analysis and Design of a Novel Leaky YIG Film Guided Wave Optical Isolator,” J. Lightwave Technol., vol. 16, no. 2, February 1998.

[2] E. Anemogiannis and E. N. Glytsis, “Multilayer waveguides: Efficient numerical analysis of general structures,” J. Lightwave Technol., vol. 10, pp. 1344-1351, 1992.

[3] M. Ajmal Khan and Hussain A. Jamid, “TE/TM Pass Guided Wave Optical Polarizer,” IEEE TEM 2003.

[4] H. Kogelnik, “Theory of Optical Waveguides,” in Guided-Wave Optoelectronics, T. Tamir, Ed. New York: Springer-Verlag, 1988.

[5] Ajoy K. Ghatak, K. Thyagarajan, and M. R. Shenoy, “Numerical Analysis of Planar Optical Waveguides Using Matrix Approach.”

[6] Ajoy Ghatak and K. Thyagarajan, Optical Electronics. Cambridge University Press.

[7] Joseph A. Edminister and Vishnu Priye, Electromagnetics (Schaum’s Outline Series). Tata McGraw-Hill Education Private Limited.


Measuring and Managing Process Performance

Table of contents

Management accounting information for activity and process decisions

After reading this chapter, you will be able to:

1) define sunk costs and explain why sunk costs are not relevant.

2) analyze make-or-buy decisions.

3) demonstrate the influence of qualitative factors in making decisions.

4) compare the different types of facility layouts.

5) explain the theory of constraints.

6) demonstrate the value of just-in-time (JIT) manufacturing systems.

7) describe the concept of the cost of quality.

8) calculate the cost savings resulting from reductions in inventories, reductions in production cycle time, production yield improvements, and reductions in rework and defect rates.

Short Case

For 50 years, the Tabor Toy Company had been producing high-quality plastic toys for children. In early 2006, Tabor experienced a large drop in sales and market share. After some investigation, this loss was attributed to a significant decrease in the quality of the product and to general delays in getting it to customers.

After several weeks of study, Don and a cross-functional team of management personnel documented numerous shop floor problems. The report by Don Pipeline, a senior manager, to top management raised several questions:

  1. Should many of the existing machines, including the major molding machine, be replaced?
  2. What should the company do about the local vendor who produced the faulty computer chips?
  3. Would it make sense to implement an entirely new production process such as JIT?

This chapter presents three types of facility designs:

1) Process layouts

2) Product layouts, and 3) cellular manufacturing, all of which can be used to help organizations reduce costs. We follow this with a discussion of how organizations can reduce costs by ensuring that they focus on improving the quality of their processes. Finally, the JIT manufacturing system is presented as a system that integrates many of the ideas we discuss in the chapter.

Evaluating Financial Implications (p. 208)

Managers must evaluate the financial implications of decisions that require trade-offs between the costs and the benefits of different alternatives. Equally important, they must recognize that some costs and revenues are not relevant in such evaluations.

Assuming responsibility for decisions

On a technical level, the correct decision for the Bonnier Company is to dispose of the machine and replace it; however, because they are concerned about their reputations within their own organizations, not all managers would do so.

Make-or-Buy Decisions

Management accountants often supply information about relevant costs and revenues to help managers make special one-time decisions. One example is a make-or-buy decision.

As managers attempt to reduce costs and increase the competitiveness of their products, they face decisions about whether their companies should manufacture some parts and components for their products in-house or subcontract with another company to supply these parts and components. Exhibit 5-3 displays details of the two lowest quotes from outside suppliers for a representative lamp in each of the four product lines manufactured in-house. Should the company accept the outside bids and terminate in-house production of these products?
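A hedged numerical sketch of the relevant-cost comparison behind such a decision (all figures are invented for illustration; they are not the Exhibit 5-3 numbers):

```python
# Make-or-buy: only costs that change with the decision are relevant.
# Allocated fixed overhead that continues either way is excluded, echoing
# objective 1 above (sunk/common costs are not relevant).
make_variable_cost = 11.50     # direct materials + labor + variable overhead
avoidable_fixed_cost = 1.25    # fixed overhead eliminated if we buy
unavoidable_fixed_cost = 2.00  # allocated overhead that remains either way
buy_price = 12.40              # lowest outside quote per lamp

relevant_make_cost = make_variable_cost + avoidable_fixed_cost  # 12.75
print("Buy" if buy_price < relevant_make_cost else "Make")       # -> Buy
```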
