Information Privacy and Personal Data Collection

Introduction

In today's information-driven world, information privacy is a growing concern. The fact that most information is now kept in electronic form increases the risk of information leaks, which can cause significant harm to the parties involved.

Topic

The topic of this paper is information privacy. Information privacy is maintained when the practices of data collection, retention, and sharing preserve the rights of the persons or parties the information concerns.

Background

The issues of information privacy arise in many spheres of contemporary life. Individuals are often asked or required to submit their personal data to various organizations: health care institutions, online services such as social networks, businesses, law-enforcement agencies, etc. Many of these are willing to share that information with partner organizations or businesses. People and organizations are thus at risk of having their private data transferred to third parties and used for those parties' own benefit, often at the cost of the individual or party whose data was taken.

Problem

As can be seen, the issue of keeping personal information private arises. Individuals are afraid that this data might be used against them, and there are a number of ways this could happen. For instance, a leak of information to financial institutions and businesses might lead to people being denied a loan or insurance, dismissed from their jobs, or turned away from a prospective job. The possibilities of public embarrassment or discrimination also pose a problem. Besides, the data can be used directly to harm a person or an organization; crimes such as fraud or identity theft can lead to rather serious consequences (Koontz, 2015).

Research questions

What are the main sources from which the personal data can be collected in a way that breaches information privacy, and what are the ways to avoid this?

Methods

This paper is a general qualitative study based on a literature review. To collect the data, an online search was performed to find materials available in electronic databases such as ProQuest and Science Direct. As a result, a number of articles related to the topic of information privacy were found. These articles were carefully studied, analyzed, and compared.

Gathered Data and its Analysis

Information privacy “is of growing concern to multiple stakeholders including business leaders, privacy activists, scholars, and individual consumers”; it is “one of the largest concerns for the consumers” (Smith, Dinev, & Xu, 2011, p. 990). There are a number of ways in which private information might be leaked and then misused. The data can be taken from electronic medical records, social networks, and other online sources, government records, records collected by businesses, etc.

Medical records

It is clear that hospitals and other medical institutions gather data about their patients and store it for medical use. Needless to say, should there be a leak, such information can be used against current and former patients. Therefore, it is an important responsibility of healthcare institutions' records managers to keep their records safe (Jones, 2014, p. 23). Besides, patients have a number of rights concerning their medical records. For instance, they can demand a copy of their medical record; they have the right to know how the information is used and with whom it is shared; they can request that the information holder not share the information with third parties (though the holder is allowed to refuse because of healthcare considerations); and they can request that the information not be shared with insurance companies (the holder is obliged to comply) (Koontz, 2015).

Social media and online privacy

Social networks and online services (such as banking services) often require their users to provide some personal data, and, according to Smith et al. (2011), users often submit this data even though they usually state that they do not intend to do so (p. 1000). Social media services, moreover, often share information about their users with business partners and other companies that wish to buy that information; clearly, the users are not informed about this explicitly (Camenisch, 2012). Accounts can also be hacked. Even though the data is often dispersed among various holders, parties that wish to gather information can do so by looking for overlapping pieces of data and filling in the missing parts, thus building a rather detailed record (Camenisch, 2012, p. 3838). Therefore, users should not disclose any important information about themselves, and if they do, they ought to use strong passwords (ones that combine a few different unrelated numbers, names, or events, or that ideally do not make sense at all) and delete the information as soon as it stops being necessary. It is also important to maintain a cautious attitude towards websites, especially social networks.1
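
To make the advice on passwords concrete, the following is a minimal sketch in Python; the scoring rules and the tiny blacklist are illustrative assumptions, not a standard drawn from the cited sources.

    import re

    COMMON_WORDS = {"password", "qwerty", "123456", "letmein"}  # tiny illustrative blacklist

    def password_strength(password: str) -> str:
        """Rate a password as weak, fair, good, or strong using simple heuristics."""
        score = 0
        if len(password) >= 12:                                               # long enough
            score += 1
        if re.search(r"[a-z]", password) and re.search(r"[A-Z]", password):   # mixed case
            score += 1
        if re.search(r"\d", password) and re.search(r"[^A-Za-z0-9]", password):  # digits and symbols
            score += 1
        if password.lower() not in COMMON_WORDS:                              # not a common word
            score += 1
        return ["weak", "weak", "fair", "good", "strong"][score]

    print(password_strength("Sunset-Walrus-42!"))  # strong
    print(password_strength("password"))           # weak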

Government records

Government authorities collect, store, utilize, and disseminate information about citizens in order to perform their functions successfully (Li, 2011). This can conflict with citizens' privacy rights and concerns. To comply with citizens' privacy rights, it is important that government organizations make sure that the information is accurately collected and properly stored and that no unauthorized third parties are able to gain access to this data (Carter & McBride, 2010).

Data records of businesses

Information privacy concerns are of major importance to enterprises, for they fear that their rivals might gain access to internal data and use it to obtain an unfair advantage in business. Information breaches are expensive for companies (Gable, 2014). Besides, not only rivals but also other parties might hack organizations' records, uncovering important data about the enterprises' employees and clients. There exist a number of instructions and principles that companies need to follow to keep their records safe.2

Discussion

As can be seen, there are many potential sources of information leakage. A set of laws and regulations exists to protect this data, as well as a number of methods to ensure its safety; nonetheless, breaches still occur, which suggests that they bring substantial profit to those who perform them. This fact leaves each person with a hard choice about whether to provide personal information to other parties. In any case, it is clear that data safety regulations must be strictly complied with, for the consequences of an information leak can be severe.

Conclusion

To conclude, it should be noted that there are many sources from which personal data can be collected; these include medical and government records, business records, and social media and online services. It is crucial that organizations comply with information safety recommendations, whereas individuals need to be careful when disclosing their personal information and keep an eye on how it is used. It is also important to study the ways in which information privacy can be maintained in more detail and to implement them in both personal and organizational activities.

References

Camenisch, J. (2012). Information privacy?! Computer Networks, 56(18), 3834-3848. Web.

Carter, L., & McBride, A. (2010). Information privacy concerns and e-government: A research agenda. Transforming Government: People, Process and Policy, 4(1), 10-13. Web.

Gable, J. (2014). Principles for protecting information privacy. Information Management, 48(5), 38-40, 42, 47. Web.

Jones, V. A. (2014). Protecting information privacy per U.S. federal law. Information Management, 48(2), 18-20, 22-23, 47. Web.

Koontz, L. (2015). Health information privacy in a changing landscape. Generations, 39(1), 97-104. Web.

Li, H. (2011). The conflict and balance between government’s information right and citizen’s privacy right. Journal of Politics and Law, 4(2), 104-107. Web.

Smith, H. J., Dinev, T., & Xu, H. (2011). Information privacy research: An interdisciplinary review. MIS Quarterly, 35(4), 989-1015, A1-A27.

Footnotes

  1. Indeed, social networks seem to be created for socializing, but in truth, they are constantly being used as a source of information by various third parties.
  2. These instructions often include recommendations on data collection, storage, utilization, retention, and disposal, as well as regulations on its disclosure, monitoring, and security (Gable, 2014).

Midfield Terminal: Abu Dhabi Airports Project

Project management process

There are different processes involved in project management. They include project planning, organizing the processes of the project, and controlling the resources involved. Project planning management aims at connecting available resources to achieve the stipulated goals. The Midfield Terminal baggage system is one of the largest international projects to have failed in such a momentous way. Looking at the project's management procedures, one can readily identify the causes of its failure.

One of the identified causes of the project's failure is the underestimation of its complexity. The project was ended due to the high cost of maintenance: maintaining the Midfield Terminal baggage system cost $1M, which was beyond the cost of using a manual system [1]. Unlike local projects, global projects are operationally very complex. Another factor that led to the failure of the Midfield Terminal baggage system is the complex architecture involved. Global projects such as this require a great deal of manpower and professional skill.

The complex nature of this piece of architecture was a major challenge and changed the requirements of the project [1]. Global projects are characterized by complex architecture because they are mostly large, high-budget projects. A slight change in the requirements or a delay in the scheduled timeline may result in very high costs. The Midfield Terminal baggage system project's delay added $560M to the estimated cost of the entire project [1]. This project is a perfect example of a failed global project, and it gives a clear picture of the cost implications.

The recommended strategies, practices, tools, and techniques

In managing global projects, one has to appreciate the fact that such projects are complex. To be effective in dealing with such complexities, one has to develop the following strategies:

Start small

To enhance collaboration between team players, it is important to first establish small dispersed projects before the launch of the main project. Global projects use different teams from different sites; small dispersed projects therefore create a platform for interaction before the actual project is launched.

Use Rigorous Project Management and Seasoned Project Leaders

For a global project to be successful, a strong project management team must be appointed [1]. This team will be responsible for the daily running of the project, thereby developing discipline, adherence to the laid-down structure, and a sense of purpose among the team players [1].

Appoint a Lead Site

As noted earlier, global projects are collaborations among various sites. With the numerous sites involved, it is imperative to have a lead site that acts as the main point of reference or headquarters. In a global project, all sites cannot carry the same weight [1]. A lead site is therefore responsible for delivering the project within the scheduled timeline and the estimated budget [1]. The main reason is that every site will have its own perception of the project, and goals may differ. One lead site harmonizes the project's goals and makes them easier to achieve.

Invest Time Defining the Innovation

Unlike a single-location project, a global project requires definition at the outset. This is because such projects are split over different time zones, cultures, and languages [1]. The definition of the product architecture, the individual functional modules, and the interdependencies between modules is crucial in global project management [1].

Reference

S. Gazzar. “Midfield Terminal at Abu Dhabi airport on track to open in 2017.” Web.


Measuring the Quality of Airport Cities

Executive Summary

Airport cities are new phenomena that emerge in rapidly developing metropolitan areas and become independent sources of employment, entertainment, business, and financial capital. No current framework for the assessment of their quality exists, and airport cities are perceived as a set of districts rather than a solid business unit. The identified gap in research is linked to the assessment of airport cities through a quality model, as little data is provided about their relation as units to business excellence.

Airport cities are being reviewed not as business units with a high impact on excellence and profitability but as public spaces or new forms of metropolitan centres that redefine the concept of suburbs and cities from a new perspective. Such a perception of airport cities leads to their fragmentation, where each section of the unit is perceived separately from the rest, which interferes with managers' ability to evaluate their quality and impact on the different business dimensions discussed below.

The inability to perceive airport cities in their entirety leads to a lack of research about their function. Leadership, processes, human resources, and partnerships of each district are observed at the local level, with no regard to their overall influence on the airport or the city. Thus, airport cities are perceived as parts of transport hubs rather than as distinct units that affect the aviation business at a greater level and directly influence competitiveness in this area. Such a perception can lead to a misinterpretation of data related to the quality management of airports.

The research aims to investigate the EFQM Excellence Model as a suitable framework for the standardized assessment of the quality of airport cities. The EFQM Excellence Model is a framework that evaluates different dimensions of a business (e.g., leadership, people, customers, partnerships, and resources) and their impact on overall business effectiveness and excellence.

The purpose of the study is to evaluate whether the EFQM Excellence Model can be used as a basis for creating standardized pillars of measuring airport cities’ quality. The suggested hypotheses are:

  • H1: The EFQM is a suitable framework for evaluating the quality of airport cities.
  • H2: The EFQM can become the basis for developing standardized pillars for quality measurement of airport cities.

Methods of analysis include a qualitative review of secondary data and content analysis. The author intends to use existing research on airport cities and their quality assessment in order to examine whether the EFQM Excellence Model is a suitable framework for measuring the quality of these units. Additionally, the author also aims to address the lack of information in current research on the assessment of airport cities within any quality models.

The current approach toward the measurement of airport cities' quality relies on their division into different districts (logistics, education, accommodation, etc.), whereas their overall effectiveness and quality are not addressed. The advantage of secondary data analysis is that it does not lead to redundant research and relies on data obtained from previous studies. The limitation of this method is its inability to generate new, unique data that might provide additional insights into the problem. Nevertheless, secondary data and content analysis will help the author not only address current gaps in research but also suggest a new solution to the identified issue.

Introduction and Research Background

Airport cities are relatively new phenomena that are currently emerging in different urban centers and near airline hubs. Due to their novelty, no framework exists to assess their quality and impact on the business excellence of an organisation. Current research addresses airport cities as a unique phenomenon with no regard to the assessment of their effectiveness, and the research about the application of existing frameworks to airport cities is scarce. Airport cities in the UK and UAE are rarely addressed in current research as well.

Problem Definition

The expansion of commercial aviation locally and globally leads not only to the need for developing airports' infrastructure but also to highly increased competitiveness in the aviation industry. The emergence of airport cities is possible due to several factors: airports can be airline hubs, business centres, cargo gateways, or all of these at once. The workforce needed to maintain such facilities can include thousands of people, which eventually results in the emergence of airport cities that include a variety of districts (aviation, logistics, education, residential, etc.).

However, no standard framework exists to evaluate the quality of airport cities and their influence on business excellence and effectiveness. Therefore, it is impossible to evaluate the effectiveness of such airport cities without a standardized quality model. The EFQM Excellence Model is a framework that allows the degree of an organisation's progression to be analysed in terms of specific dimensions (e.g., customers, employees, etc.). It can be viewed as a suitable framework for the assessment of airport cities' quality.
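
To illustrate how such a dimension-based assessment might be operationalised, the sketch below combines per-criterion scores into a single weighted result. The criterion names follow the general EFQM structure, but the weights, scores, and the airport-city example are illustrative assumptions rather than the official EFQM (RADAR) scoring.

    # Illustrative weights only; the official EFQM weightings and RADAR logic differ.
    EFQM_WEIGHTS = {
        "leadership": 0.10,
        "strategy": 0.10,
        "people": 0.10,
        "partnerships_and_resources": 0.10,
        "processes_products_services": 0.10,
        "people_results": 0.10,
        "customer_results": 0.15,
        "society_results": 0.10,
        "business_results": 0.15,
    }

    def efqm_score(assessments):
        """Combine per-criterion assessments (0-100) into one weighted score."""
        return sum(weight * assessments.get(criterion, 0.0)
                   for criterion, weight in EFQM_WEIGHTS.items())

    # Hypothetical assessment of an airport city treated as a single unit
    airport_city = {criterion: 60.0 for criterion in EFQM_WEIGHTS}
    airport_city["customer_results"] = 75.0
    print(round(efqm_score(airport_city), 2))  # weighted overall score out of 100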

Purpose of the Study

The purpose of the study is to analyse the EFQM Excellence Model as a potentially suitable framework for the assessment of the quality of airport cities. As no assessment standards of such cities currently exist, the study aims to examine the phenomenon of airport cities and suggest the EFQM as a potentially effective framework for the measurement of airport cities’ quality as a unit and not as a set of divided districts.

Research Questions and Research Objectives

The research questions are as follows:

  1. How can the EFQM Excellence Model be integrated into the assessment of airport cities’ quality?
  2. Is the EFQM Excellence Model a framework suitable for assessing airport cities as a unit and not a set of districts?
  3. Can pillars of measuring airport cities’ quality be created on the basis of the EFQM?

Research objectives are as follows:

  1. Examine the phenomenon of airport cities and critically assess current research on it.
  2. Evaluate whether the EFQM is a suitable model for the assessment of airport cities’ quality.
  3. Analyse how the EFQM can be applied to assess airport cities.

Literature Review

The search for suitable sources for the literature review was conducted using Google Scholar; the keywords “airport cities”, “airport cities quality”, “EFQM”, and “airport city” were used. Studies older than five years were excluded from the search, as well as studies that focused on topics not directly related to the assessment of airport cities’ quality (e.g., airport branding, transportation services and social media, etc.).
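
As an illustration of how these screening criteria could be applied systematically, the following sketch filters a set of candidate records by publication age and keyword overlap; the records, relevance terms, and field names are invented for demonstration and do not reproduce the actual search results.

    # Hypothetical records standing in for Google Scholar search results
    candidates = [
        {"title": "Airport city governance", "year": 2015, "keywords": {"airport city", "quality"}},
        {"title": "Airport branding strategies", "year": 2016, "keywords": {"branding"}},
        {"title": "Transit services and social media", "year": 2009, "keywords": {"transportation"}},
    ]

    RELEVANT_TERMS = {"airport cities", "airport city", "airport cities quality", "EFQM"}
    MAX_AGE_YEARS = 5

    def screen(records, reference_year):
        """Keep records at most five years old whose keywords overlap the search terms."""
        return [
            record for record in records
            if reference_year - record["year"] <= MAX_AGE_YEARS
            and record["keywords"] & RELEVANT_TERMS
        ]

    for record in screen(candidates, reference_year=2016):
        print(record["title"])  # only "Airport city governance" passes both filters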

Appold and Kasarda (2013) provide extensive research on the development of airport cities as the new downtowns and emphasize their impact on the metropolitan area: airports are rearranging space and, in effect, becoming the city. The hypothesis discussed by the authors indicates that "businesses dependent upon air transport may increasingly prefer locations near air interchanges" (Appold & Kasarda 2013, p. 1255).

A second hypothesis proposed by the authors suggests that airport cities become work and entertainment zones, which directly impacts the effectiveness of businesses related to tourism and air services. Saldıraner (2014) discusses the requirements necessary for establishing an efficient airport city: a common planning approach; sufficient area for the airport and the airport city; the availability of business and finance, convention, logistics, and shopping centres, hotels, and recreation and accommodation areas; and multimodal transportation modes. Rizzo (2014) also points out the importance of available transportation that connects town centres with the airport city via orbital transit corridors.

Nikolaeva (2012) argues that the unique nature of an airport city (in this case, Schiphol Airport) lies in its ability to provide 24/7 public space that does not aim to substitute for a city centre but redefines the notion of "cityness" per se. However, quality assessment of airport cities is lacking: the majority of research treats them as a new phenomenon with no regard to their impact on business effectiveness and excellence.

Sections of the airport city are evaluated separately and not as a whole. The EFQM model's "key implementation factors cover people, processes, structures, and resources that the organization can use to manage quality" (Suárez, Roldán & Calvo-Mora 2014, p. 866). It is suggested that the EFQM can be effective in measuring the processes that take place in airport cities, such as leadership, human resources, strategies, and products and services, in order to assess their current level of excellence and set further goals.

  • H1: The EFQM is a suitable framework for evaluating airport cities’ quality.
  • H2: The EFQM can become the basis for developing standardized pillars for airport cities’ quality measurement.

Research Methodology and Design

The research design is a qualitative study that will rely on secondary data analysis to answer the research questions. The researcher's goal is to utilize existing research data from journal articles, books, available surveys, and interviews to evaluate how the EFQM is used for the quality assessment of businesses, how airport cities' quality is assessed by quality managers, and whether any standards can be developed on the basis of the EFQM.

The researcher aims to use content analysis to determine how airport cities' impact on business effectiveness is discussed in the research and what gaps the research does not address. With the help of content analysis, the researcher will be able to identify recurring issues and themes in the secondary data and examine how they relate to the research questions and hypotheses (e.g., how the lack of standardized assessment influences cities' effectiveness).
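
A minimal sketch of how such theme identification could be supported computationally is shown below; the theme dictionary and the sample passage are assumptions made for demonstration and do not reproduce the study's actual coding scheme.

    import re
    from collections import Counter

    # Hypothetical theme dictionary mapping each theme to indicative terms
    THEMES = {
        "quality assessment": ["quality", "assessment", "excellence"],
        "fragmentation": ["district", "section", "separate"],
        "business impact": ["effectiveness", "profitability", "competitiveness"],
    }

    def theme_frequencies(text):
        """Count how often each theme's indicator terms occur in the text."""
        tokens = re.findall(r"[a-z]+", text.lower())
        counts = Counter()
        for theme, terms in THEMES.items():
            counts[theme] = sum(tokens.count(term) for term in terms)
        return counts

    sample = ("Each district of the airport city is assessed separately, "
              "which obscures overall effectiveness and excellence.")
    print(theme_frequencies(sample))  # rough per-theme term counts for the passage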

The advantage of secondary data analysis is that it “avoids repetition of research & wastage of resources by detailed exploration of existing research data” (Tripathy 2013, p. 1478). However, it cannot provide any unique materials or data obtained directly from the source of investigation (for example, via questionnaires and interviews).

Reporting, Timing, and Budget

The research will be divided into a series of steps, presented in Table 1 below:

Table 1: Research Timing.

Data Collection: October-November
Data Analysis: October
Research Preparation: October-November
Research Development: December
Research Review: December-January
Rough Draft Preparation: February

As the research does not require any financial investment, it is anticipated that no budgeting is necessary for this project. However, additional interviews with individuals related to the research (such as quality managers) may affect its timing and postpone development and review in the event of a shift in the interview schedule.

Reference List

Appold, SJ & Kasarda, JD 2013, ‘The airport city phenomenon: evidence from large US airports’, Urban Studies, vol. 50, no. 6, pp. 1239-1259.

Nikolaeva, A 2012, ‘Designing public space for mobility: contestation, negotiation and experiment at Amsterdam Airport Schiphol’, Tijdschrift Voor Economische en Sociale Geografie, vol. 103, no. 5, pp. 542-554.

Rizzo, A 2014, ‘Rapid urban development and national master planning in Arab Gulf countries. Qatar as a case study’, Cities, vol. 39, no. 3, pp. 50-57.

Saldıraner, Y 2014, ‘The new airport in Istanbul: expectations and opportunities’, Journal of Case Research in Business and Economics, vol. 5, no. 1, pp. 1-15.

Suárez, E, Roldán, JL & Calvo-Mora, A 2014, ‘A structural analysis of the EFQM model: an assessment of the mediating role of process management’, Journal of Business Economics and Management, vol. 15, no. 5, pp. 862-885.

Tripathy, JP 2013, ‘Secondary data analysis: ethical issues and challenges’, Iranian Journal of Public Health, vol. 42, no. 12, pp. 1478-1479.


Technology Impacts on New Generation of Children

Abstract

Technology has become part and parcel of life today and thus has a significant influence on the way society lives. An examination of technology trends and the new generation of children indicates that technology is influencing children's lives tremendously. However, the divide between developed and developing countries extends to children's exposure to technology. Technology enhances a child's imagination.

The new generation of children in the United States can come up with innovative ideas and objects due to exposure to technology. Technology enables kids in both the United States and Kenya to socialize and make new friends. Besides, it supports personalized learning. The education system should embrace the use of technology as a learning tool in a measured way since it is dynamic.

Introduction

Technology has become an essential element of civilization in the current world due to its spread and application. Almost every facet of life depends on technology because each is affected in one way or another. Technology derives its importance from the way it simplifies nearly everything in people's lives. It simplifies tasks by providing advanced ways of performing them and solving problems. Technology is bound to influence children in one way or another, as they are part of society.

This will happen on purpose or by default, depending on the prevailing situation. Technology will shape not only the present generation of children but also the coming generations. The technological influence will define how they learn and interact with society. A look at the previous generations indicates that, whereas they enjoyed a certain level of technological advancement, they cannot be compared with the present generation. Technology will affect children from developed and developing nations differently based on the degree of exposure. This paper discusses how technology will shape the new generation of children in Kenya and the United States.

Technology and the New Generation

The new generation of children can be described as the K-12 cohort. The K-12 cohort is a generation that has been brought up in the digital era that defines modern society. According to Klopfer, Osterwel, Groff, and Haas (2009), technology has a significant influence on a child's learning and cognitive development (p. 6). Young children who grow up under technological influence tend to develop sharp minds.

They easily understand and interpret situations as they happen. Technical devices equip children with the ability to connect situations by processing them quickly. The use of computers by children will enable future generations to have a good understanding of technological tools and how they work to enhance people's lives (Palak & Walls, 2014, p. 432). Even a simple understanding of how technological instruments work is adequate to advance a child's knowledge.

It will lead to a generation of children with a somewhat advanced way of looking at things because of their superior understanding. Due to the availability of technological tools, the new generations of children will have a better understanding of the world around them. Technology will help children to gather information about different forces that impact the universe (Boyden, Dercon, & Singh, 2015, p. 195). Technology is about new ways of doing things.

Thus, the new generation will have the ability to create innovative ways to experiment, novel methods to explore, and original means to create different things. Indeed, technology will shape the new generation by advancing every facet of its life. Nevertheless, the influence of technology will vary between developed and developing nations.

Effects on Social Interactions

Technology has an enormous influence on the social interaction of society. The impacts of technology will be reflected in future generations because it provides a platform for social interaction. The environment in which children grow up affects their socialization. The environment is a holistic component that influences the growth of a child and defines how a kid relates to others (Plowman & McPake, 2013, p. 28).

The technological environment provides a new way of socialization that is a departure from what used to happen before the advent of technology. In the United States, technology gives children a significant opportunity to socialize. Kerawalla and Crook (2012) allege, “Technology such as video games allows children to hang out and provides structured time with friends” (p. 763). Besides, video games help children to initiate discussions that enable them to make friends. As Kenya continues to adopt technology, kids are gradually changing the way they socialize.

Nevertheless, Kenyan kids socialize virtually, unlike American children, who have time to meet face-to-face with their friends. Many kids spend a lot of time on social media. They use social media to interact with friends (Hyo-Jeong & Bosung, 2011, p. 109). It becomes hard for parents to know the kind of company that their children keep. The kids no longer prefer the outdoor activities that enabled them to meet and interact with friends in the past. In other words, one may claim that while technology continues to enhance socialization in the United States, it is gradually eroding it in Kenya.

Technology and Education

Learning in educational institutions is slowly adopting technological methods to impart knowledge to students. The development of new methodological ways of imparting knowledge to the new generation necessitates the adoption of technology. The generation will enjoy new ways of learning through interactive technology-oriented tools. The technology-teaching relationship is important for education purposes because it aligns the child with the expectations of society.

In the United States, children are exposed to computer games at an early age. The games require them to think hard. As a result, American children can tackle arithmetic at a tender age. Besides, American kids are imaginative. Simon and Halford (2015, p. 65) maintain that most kids in the United States do not have problems in expressing themselves. Playing video games on computers equips them with skills to anticipate consequences and make sound decisions.

On the other hand, children in Kenya do not have access to computers. Consequently, they have challenges in learning (Powell, Diamond, Burchinal, & Koehler, 2010, p. 304). Lack of exposure to technology makes it hard for Kenyan kids to be creative (Simon & Halford, 2015, p. 65). Besides, children lack the desire to discover new things. Most American kids have an interest in learning history. Technology exposes them to a myriad of information regarding the past, thus triggering their desire to learn (Wrzus, Hänel, Wagner, & Neyer, 2013, p. 60).

According to Couse and Chen (2014, p. 81), the Kenyan schools are unable to provide individualized learning to kids due to lack of exposure to technology. Students are taught the same regardless of their differences in understanding. On the other hand, Americans use technology to provide individualized learning. Video games are set in a way that they help kids to tackle difficult challenges gradually. Consequently, in the future, the United States will have a generation of learned individuals with the ability to tackle varied challenges creatively. On the other hand, Kenya will comprise a generation of semi-skilled individuals who will have difficulties in handling challenging situations.

Technology and Innovation

The new generation of children will be largely over-dependent on technology as part of their lives. Growing up in a technology-saturated environment creates a dependency syndrome. It leads to the assumption that the world has always had technology. Such an assumption is false because it can mislead a generation. Growing up with technology means that the new generation will become either innovative or lazy in the future (Plowman & McPake, 2013, p. 28).

The majority of technology users are consumers who do not have an idea about its origin. Therefore, they might not be in a position to appreciate and understand the lack of technology. Kids become innovative if they are exposed to technology at an early age. American kids are exposed to numerous technological games that spur innovation. One of the games is Disruptus. Sternberg and Preiss (2013) describe Disruptus as "an innovative-thinking exercise practiced by firing off ideas based on images of objects and a directive determined by the roll of the die" (p. 37). The kids acquire innovative skills that enable them to create numerous objects. In Kenya, kids are not exposed to innovative games. Nevertheless, access to technology broadens their minds, enabling them to come up with innovative ways of dealing with challenges that they encounter.

Talent Development

Not all children are gifted in education. However, exposing students to technology at an early age may enable them to discover their talents. In the United States, kids are likely to discover and mold their talents without difficulties due to access to technology. In Kenya, kids are only exposed to digital learning. They are not allowed to use technology for other purposes, particularly when in school.

On the other hand, American children have an opportunity to use technology to exercise their creativity (Boyden, Dercon, & Singh, 2015, p. 195). In some instances, the children discover their talents and nurture them. For instance, some kids learn to compose music at an early age due to exposure to technology. In the case of Kenya, kids may take longer to discover their talents, as they use technology for education purposes only. The only other way they use technology is to socialize, and that does not facilitate talent development.

Conclusion

Child development and learning continue to transform with the emergence of new technologies. When children are exposed to technology, they learn and grasp new ideas, thus acquiring a better understanding of concepts. A comparison of children who have come into contact with technology and those who have not shows that the former have a better understanding of things. They tend to make quick connections between ideas. Exposure to technology has helped American kids become creative. However, this does not mean that Kenyan children are entirely disadvantaged. The cognitive ability of children allows them to continue to learn.

Hence, when kids are exposed to new ideas, they tend to comprehend them fast. Kenyan kids will eventually adapt to technology as they continue to interact with it. The advent of technology is forcing many education systems to incorporate it into their practices. In Kenya, traditional teaching methods are gradually fading out. Institutions are slowly adopting new technologically-inspired teaching methods. It will enable the new generation of kids to develop technical skills and become innovative. However, as much as technology is here to stay, it will take time before it is entrenched in the children’s lives, particularly in Kenya. The kids will require sufficient time to adapt to the use of technology as it is new to them.

References

Boyden, J., Dercon, S., & Singh, A. (2015). Child development in a changing world: Risks and opportunities. The World Bank Research Observer, 30(2), 193-219.

Couse, L., & Chen, D. (2014). A tablet computer for young children? Exploring its viability for early childhood education. Journal of Research on Technology in Education, 43(1), 75-96.

Hyo-Jeong, S., & Bosung, K. (2011). Learning about problem based learning: Student teachers integrating technology, pedagogy and content knowledge. Australasian Journal of Educational Technology, 25(1), 101-116.

Kerawalla, L., & Crook, C. (2012). Children’s computer use at home and at school: Context and continuity. British Educational Research Journal, 28(6), 751-771.

Klopfer, E., Osterwel, S., Groff, J., & Haas, J. (2009). Using technology of today in the classroom today. Massachusetts: Massachusetts Institute of Technology.

Palak, D., & Walls, R. (2014). Teachers’ beliefs and technology practices: A mixed-methods approach. Journal of Research on Technology in Education, 41(4), 417-441.

Plowman, L., & McPake, J. (2013). Seven myths about young children and technology. Childhood Education, 89(1), 27-33.

Powell, D., Diamond, K., Burchinal, M., & Koehler, M. (2010). Effects of an early literacy professional development intervention on head start teachers and children. Journal of Educational Psychology, 102(2), 299-312.

Simon, T., & Halford, G. (2015). Developing cognitive competence: new approaches to process modeling. New York: Psychology Press.

Sternberg, R., & Preiss, D. (2013). Intelligence and technology: The impact of tools on the nature and development of human abilities. New York: Routledge.

Wrzus, C., Hänel, M., Wagner, J., & Neyer, F. (2013). Social network changes and life events across the life span: A meta-analysis. Psychological Bulletin, 139(1), 53-80.


The Lead Company’s Truforce HRM Software

Due to the ever-rising number of employees at the Lead Company, there has been a need to develop a system that can assist the company's management in monitoring the workforce effectively. Fortunately, the human resource department has been able to develop a system that it believes will help the company carry out the task of monitoring employees in a more cost-effective way (Allen & Terry 2005, p. 181).

The system that the HR department has developed is called Truforce HRM Software. This system can optimize HR activities, ranging from rewarding talented and hardworking employees and tracking personnel to the general management of the workforce. The software is designed in such a way that it will help the company reduce the cost it normally incurs in HR activities (Waddill & Marquardt 2011, p. 106).

Truforce HRM has four main sections. The first is the 'interactive dashboard system', which provides a space in which information from any part of the software can be collected and controlled quickly. The second section is the 'employee manager'. This section enables the HR department to centralize all issues relating to the employees; that is, it provides an opportunity to view all employee details in a more centralized manner. The 'core reporting' section within the software is responsible for analyzing employee data. Lastly, the 'employment center' ensures that the company copes with the growing number of employees (Allen & Terry 2005, p. 184).
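
A minimal sketch of how these four sections might map onto a simple data model is given below; the class and field names are hypothetical and do not describe Truforce's actual internals.

    from dataclasses import dataclass, field

    @dataclass
    class Employee:
        employee_id: int
        name: str
        department: str

    @dataclass
    class EmployeeManager:
        """Centralizes employee records (the 'employee manager' section)."""
        records: dict = field(default_factory=dict)

        def add(self, employee):
            self.records[employee.employee_id] = employee

    @dataclass
    class CoreReporting:
        """Analyzes employee data (the 'core reporting' section)."""
        manager: EmployeeManager

        def headcount_by_department(self):
            counts = {}
            for employee in self.manager.records.values():
                counts[employee.department] = counts.get(employee.department, 0) + 1
            return counts

    # The 'interactive dashboard' would pull figures like these from every module,
    # and the 'employment center' would extend the model as headcount grows.
    manager = EmployeeManager()
    manager.add(Employee(1, "A. Example", "Engineering"))
    manager.add(Employee(2, "B. Example", "Engineering"))
    print(CoreReporting(manager).headcount_by_department())  # {'Engineering': 2}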

Advantages of Truforce HRM Software

Truforce HRM software will enable the company to make adjustments in many areas, especially those that affect the human resource department. Firstly, there will be an improvement in the hiring process. The number of people who apply for jobs at the company has dramatically increased. As a result, the HR staff has become overwhelmed with perusing files manually to assess the eligibility of applicants. For that reason, the department will use the software to streamline the hiring process. This will be possible since the system can be used to define advanced job requirements, which will then be displayed directly on the company's website (Waddill & Marquardt 2011, p. 109).

Through the software, the HR department will be in the best position to track and maintain employee attendance. As a matter of fact, monitoring the attendance of employees within the company has always been a great challenge. The software has an 'attendance manager' section, which HR personnel will use to record and control the way employees spend their work time (Allen & Terry 2005, p. 185).

The 'task and calendar management' section of the software is useful when it comes to remembering and managing the company's important dates. Through task management, HR personnel can easily assign work to one or more employees. The section also enables personnel to monitor and evaluate the progress of a task that has been assigned to a given employee. The calendar management part, in turn, enables HR staff to organize the company's important dates in a more centralized manner. In addition, this section can attach reminders to events considered important to the company (Peng 2011, p. 463).
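
As an illustration of the task and calendar idea, the sketch below attaches a reminder date to an assigned task; the field names and the three-day reminder rule are assumptions for demonstration only.

    from datetime import date, timedelta

    def create_task(title, assignees, due, remind_days_before=3):
        """Return a task record with a reminder date for the company calendar."""
        return {
            "title": title,
            "assignees": list(assignees),
            "due": due,
            "reminder": due - timedelta(days=remind_days_before),
        }

    task = create_task("Quarterly safety audit", ["HR-014", "HR-022"], date(2024, 6, 28))
    print(task["reminder"])  # 2024-06-25, three days before the due date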

The employees of the Lead Company have in the recent past been involved in accidents while at work. Unfortunately, most of these accidents go unreported. There are also other incidents in the company that go unreported to the management. Records of these incidents may be important in situations such as defending an employee or the company in legal matters. Through this software, the management will be able to monitor everything that goes on in the company, so no event will go unnoticed. This will be done through the 'incident manager' section (Allen & Terry 2005, p. 187).

Weakness of the Software and the Recommended Improvements

Even though the software will bring numerous benefits, it has a few limitations. Firstly, the system is complicated to operate. It has many sections, which makes it difficult for ordinary HR staff members to use. This implies that the company will have to spend a lot of money training members who are not well conversant with the system. In addition, the fact that the system monitors everything that employees do while at work is likely to interfere with employees' privacy. This will in turn demoralize even hardworking employees (Peng 2011, p. 466).

It is recommended that some improvements be made to the software before it is eventually incorporated into the company's management system. Some of the sections, especially those of lesser importance, will have to be removed from the system. For instance, the 'benefit manager' and 'calendar management' sections can be removed, and the software will still remain effective. The removal will make the system less complicated (Waddill & Marquardt 2011, p. 110).

References

Allen, S & Terry, E 2005, Beginning relational data modeling [a practical, step-by-step guide to data modeling or all IT professionals], Apress, Berkeley, CA.

Peng, MW 2011, Global business, Cengage South Western, Mason, OH.

Waddill, DD & Marquardt, MJ 2011, The e-HR advantage: the complete handbook for technology-enabled human resources, Nicholas Brealey Pub, Boston, MA.


Smartphones Industry

Introduction

The world is constantly changing on the socio-economic, technological, and cultural fronts. The effects of these rapid changes frequently manifest themselves in the business world in the form of intense competition and dynamic business environments. Markets, products, technology, and competitive conditions are rapidly changing. Therefore, all organizations should constantly develop effective marketing strategies to enable them to adapt to these changes and sustainably deliver goods and services of value to their customers.

Slow economic growth and international crises (as witnessed in the years 2000-2009) are some of the factors that force the organizations to change. Advertising, which is one of the marketing tools, has not been left behind. Advertising is now not just selling products alone but also selling information about products (Kotler, 2006).

Advancements in Information and Communications Technology (ICT) have brought forth the emergence of smartphones, hence promoting internet advertising. Most companies now advertise online using, for example, Google.com, Yahoo.com, Amazon.com, Alibaba.com, YouTube.com, and affiliate marketing, which consists of banner advertisements, pay-per-click, pay-per-view, and pay-per-call advertising. In addition, interactive advertising and blog- or article-based advertising are also popular. For companies to keep up with the shifts in communication trends, there is a need to embrace ICT and make use of new media as much as possible. This will allow consumers to take part in market conversations in which products and services are reviewed, ranked, and evaluated based on consumer experience (Kotler, 2006).

In the current world of technology, smartphones have dominated the market and are expected to become one of the preferred information centers and entertainment devices for the majority worldwide. The technology used by these phones has attracted much attention, making the idea lucrative to the phone manufacturing industry as well as the wireless industry. Investments in shares of wireless technology companies and wireless service providers have proved considerably profitable within the current market.

For example, model portfolios such as AlphaProfit's Focus have benefited from market share gains in Fidelity Select Wireless, which concentrates investments in the wireless industry and provides ample space for technological growth. The use of wireless technology has also created many business opportunities in both developed and emerging markets. The wireless industry continues to record a high growth rate owing to improvements in current smartphone capabilities, which make the use of third-generation wireless networks possible (Kotler, 2006).

Smartphone Features

Smartphones have improved qualities compared to ordinary mobile phones. Their functions are built in and operate digitally. Smartphones are packaged with many features, making them capable of performing diversified functions, serving as entertainment devices and information centers at the same time. Smartphones incorporate several features such as web browsing, e-mail, and multimedia capabilities.

Currently, there are smartphone models with enough horsepower to run integrated software applications such as enterprise customer relationship management software; smartphones are also used in the automobile industry for running navigation programs. They are identified by a fully equipped QWERTY-style keyboard, instant messaging, built-in music players, and other important features normally found in devices sold in higher-end markets (Kotler, 2006).

Global Demand

The smartphone industry is experiencing rapid growth, especially in the handset segment of the market. In 2004, smartphone sales rose to over seventeen million units, representing 3% of global sales, which then stood at over 650 million handsets, according to research conducted by Strategy Analytics. Predictions reveal an increase in demand for smartphones in the subsequent years, to approximately 125 million units of total handset sales globally, representing 48% annual growth in global smartphone market demand (Subramanian, 2005).

Regional Demands

Research shows that Asia and Europe are the continents fastest to adopt smartphone technology. This is attributed to the fact that these regions have embraced the rollout of advanced wireless networks, encouraging wide use of smartphones. The Asia-Pacific region accounts for approximately 37% of global smartphone market demand (Subramanian, 2005). Countries within the region, such as South Korea and Japan, record a higher rate of smartphone usage. The European market accounts for about 27% of total global smartphone demand, with demand in North America standing at 25%. However, market projections predict more demand from Europe than from Asia in the near future (Subramanian, 2005).

Smartphone Manufacturers

Several companies within these regions are involved in the manufacture of smartphones. One of them, Nokia (NYSE: NOK), is considered globally the dominant smartphone manufacturer; the Finnish company currently commands more than half of the global market share. Nokia has unleashed new smartphones with unique features, for example the Nokia 7710, which has a wider screen, a full internet browser, an upgraded music player, an integrated built-in camera, and an FM radio receiver (Subramanian, 2005). The Nokia 7710 smartphone has the ability to post pictures and text directly to websites using a feature known as Moblog.

There is also the Nokia 3230 smartphone, which is even more upgraded since it also has a video recorder as well as a movie director feature. Such technological advancement has allowed Nokia to differentiate itself from competitors within the mobile phone industry. This led to the signing of deals with media companies such as Macromedia and RealNetworks, and the technology is moving towards manufacturing handsets with wireless television feeds (Subramanian, 2005).

Business Opportunities

Several business opportunities could be realized with the emergence of smartphones, since the adoption rate in different regions is skyrocketing, boosting the smartphone industry. Several other manufacturers are coming up with new smartphone features; such companies include palmOne and Research In Motion, which recently introduced GSM smartphone models referred to as the Treo 650 and the BlackBerry 7100 series. The unique additional features of these phones, which let them double as a portable information centre as well as an entertainment device of choice, have excited the market.

The phones have also provided business opportunities to wireless service providers such as Telecomm and Vodafone, enabling them to double their revenue in the recent past. They also provide a fertile market for telecommunication equipment makers. This suggests that the emergence of smartphones will encourage the implementation of 3G wireless networks, to the benefit of wireless network equipment suppliers such as Ericsson and of smartphone semiconductor chip manufacturers (Subramanian, 2005).

Internet advertising utilizes the power of electronic commerce to sell and market products. Electronic commerce refers to any market on the internet; it supports the selling, buying, and trading of products or services over the internet. Internet advertising forms a subset of electronic commerce. With the growth of the internet, what is sold is not just products alone but also information about products, advertising space, software programs, auctions, stock trading, and matchmaking. Newer marketing techniques are being invented all the time, and it is important to know how the trend will develop.

Companies are inventing new techniques to find better ways to generate revenue and establish their brand on the internet. Consumers are becoming smarter and smarter. They do not want to be a party to the internet advertising campaigns made by companies unless they get some incentive for doing so. They would be quite keen to participate in campaigns provided they are compensated in some way by the companies.

There are usually two or three parties involved in internet advertising: either companies and end users, or companies, internet marketing companies, and end users. In a two-party model, companies themselves directly derive revenue from the end users. In a three-party model, internet marketing service providers act as intermediate revenue providers for companies (Subramanian, 2005).

One might wonder what drives firms to pursue internet marketing through phones and what size of firms would be interested in internet marketing. A lot of existing work in this area provides valuable information regarding the factors that drive companies to adopt internet marketing. These studies state that different factors drive companies of different sizes to pursue internet marketing.

The drivers are willingness to cannibalize, entrepreneurial drivers, management support, and market pressure (Kotler, 2006). Existing work also compares and contrasts companies' motivations to choose between internet channels and traditional channels, suggesting that the web will be a serious alternative to traditional advertising and that proper pricing by internet companies is what attracts the consumer. According to Christopher (2000), the type of supply chain that companies should choose depends on three factors, namely product variety, product variability, and demanded volume.

The author maintains that supply chains have to be agile in unpredictable environments characterized by volatile demand and high variability of product demand (as is the case for personal computers). Others measure agile capabilities in the supply chain, and more recently, Kotler (2006) presented a model for assessing supply chain flexibility. There are some areas where a lean approach is very important, in particular where demand is predictable, the requirement for variety is low, and volume is high, the conditions under which Toyota developed the lean philosophy.

The supply chain within the smartphone industry experiences a common problem referred to as the bullwhip effect. Even small fluctuations in demand or in the inventory levels of the final company in the chain are propagated and enlarged throughout the chain. Because each company in the chain has incomplete information about the needs of the others, it has to respond with a disproportionate increase in inventory levels and, consequently, an even larger fluctuation in its demand relative to others down the chain (Forrester, 1961). Several authors, including Forrester (1961), have shown that the production peak can be significantly reduced by transmitting information directly from the customer to the manufacturer.

Another problem is that companies often tend to optimize their own performance, thereby disregarding the benefits of the supply chain (SC) as a whole (local instead of global optimization). The maximum efficiency of each link does not, however, necessarily lead to global optimization (Forrester, 1961; Kotler, 2006). In addition, human factors should also be taken into consideration: decision-makers at various points along the SC do not usually make perfect decisions (due to a lack of information or their personal limitations), and their decisions are also influenced by employee reward systems (Forrester, 1961; Kotler, 2006). Regardless of the number of difficulties and problems in supply chain management (SCM), the core of successful SCM is efficient information transfer and information sharing.
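
The bullwhip effect described above can be illustrated with a toy simulation in which each stage of a three-link chain orders a fixed safety margin above what was ordered from it, so small swings in end-consumer demand grow as they travel up the chain. The stages, margin, and demand range are illustrative assumptions, not figures from the smartphone industry.

    import random

    random.seed(1)

    def simulate_bullwhip(periods=12, safety_margin=0.2):
        """Propagate consumer demand up a retailer-distributor-manufacturer chain."""
        consumer, retailer, distributor, manufacturer = [], [], [], []
        for _ in range(periods):
            demand = random.randint(90, 110)  # fairly stable end-consumer demand
            consumer.append(demand)
            retailer.append(demand * (1 + safety_margin))           # each stage over-orders
            distributor.append(retailer[-1] * (1 + safety_margin))
            manufacturer.append(distributor[-1] * (1 + safety_margin))
        return consumer, retailer, distributor, manufacturer

    stages = simulate_bullwhip()
    for name, series in zip(["consumer", "retailer", "distributor", "manufacturer"], stages):
        swing = max(series) - min(series)
        print(f"{name:12s} order swing: {swing:.1f}")  # variability widens up the chain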

The emergence of the smartphone industry has led to several adjustments within the market, including changes in advertising models, which are moving away from what was known as 'interruptive marketing' done through radio and TV. These ads were a source of interruption to programs, just as print ads interrupted the reading process, among other things.

Engagement marketing, however, promotes interactivity and dialogue between brands and the audience, is much more geared towards customer satisfaction, and uses media that correlate with specific demographics' behavioral patterns. Smartphones, through the internet access they provide, have allowed this by offering an interactive space for chats, e-mails, and even computer games. Marketers now have to track communication trends and establish positive relationships between customers and their brands (Kotler, 2006).

According to Michelle (2009), a study on the case of developing countries assessed the impact of ICTs (which include the Internet, mobile phone, pager, personal computer, and telephone) on Gross National Income (GNI) per capita in developing countries in 2005. This study suggested that a one-percent increase in the adoption of smartphones, personal computers, and fixed-line telephones brings about an increase in average income per person in lower-middle-income and low-income developing countries of approximately 2.8%, 4.1%, and 6.3%, respectively.

The absence of easily accessible Internet advertising may have resulted in the absence of a critical mass in Internet adoption and usage. To maximize sales, smartphone companies pay a premium for wide exposure through the mass media. Smartphones provide yet another advertising space; advertising is common but not restricted to the realms of billboards, public transportation, movies, schools, and clothing, and even bathroom stalls carry ads, with the industry constantly finding new ways to advertise. The advertising process is influenced by the commoditization of products and the blurring of consumers' own perceptions of companies' offerings.

In order to differentiate and position their products and/or services, today’s businesses employ advertising that is sometimes considered not only in bad taste but also deliberately intrusive and manipulative. Marketers within the industry are therefore advised to understand their “responsibility for the emerging portrait of future society” (Christopher, 2000). One of the main challenges facing smart phone manufacturers is avoiding the trap of merely creating “a happy customer in the short term”, because “in the long run both consumer and society may suffer as a direct result of the marketer’s actions in ‘satisfying’ the consumer” (Kotler, 2006).

Case: BlackBerry

Introduction

BlackBerry-RIM and other phone companies have made it possible for consumers to access the internet. The internet can now be accessed almost anywhere through cheaper means, especially through BlackBerry’s mobile Internet devices. Mobile phones, data cards, handheld game consoles, and cellular routers allow users to connect to the Internet from anywhere there is a wireless network supporting that device’s technology. Within the limitations imposed by small screens and the other limited facilities of such pocket-sized devices, Internet services, including email and the web, may be available.

Service providers may restrict the services offered, and wireless data transmission charges may be significantly higher than for other access methods. In terms of advertising, the internet is the first mass medium that allows for two-way mass communication (Kotler, 2006). Whereas traditional advertising (for example radio, TV, newspapers, and billboards) has often been based on mass marketing and push strategies in which fixed one-way messages are targeted at potential consumers, internet advertising is increasingly based on non-push or pull strategies (Kotler, 2006) and allows consumers to take part in market conversations.

Internet Application

BlackBerry phones have many advantages and offer many convenient, easy-to-use services and functions. One of the main functions of such phones is attracting customers from all over the world by using global communication channels such as the Internet. Technologically advanced phones have made the practice of marketing products or services by promoting them on the Internet possible and accessible. With advancements in ICT, the global advertising industry has been tending towards creativity. With the outsourcing of the media buying function to media agencies throughout the 1980s and 1990s, the advertising industry has been nurturing creativity as its core product (Pratt 2006).

The focus on creativity may also be seen as a response to advertisers’ wish to counter consumer skepticism towards traditional advertising through a stronger focus on creative, innovative, and entertaining advertising campaigns (Leslie 1997). One may thus say that the core product of the advertising industry is creativity. The first ICT wave revolved around the convergent nature of new media (such as the Internet and mobile telephony) and their ability to combine still and moving images, text, voice, and music (Pratt 2006). However, this convergence remained part of a traditional one-way mass communication paradigm. With the rise of web 2.0 and social media, internet advertising and market communication have become more oriented towards a dialogue with the consumer.

Though most companies still advertise through traditional media (TV, radio, newspapers, and outdoor media), internet advertising is now gaining popularity. The United Kingdom Advertising Association (2004) reports that in 2003 internet advertising expenditure grew by 61.6%, compared with 48.7% for newspapers and 6.8% for radio (Egan, 2007). Common forms of internet advertising include websites, banner adverts, pop-ups, spam emails, email promotions, sponsored links (such as Google ads), and videos distributed online through YouTube, MySpace, or email. Internet advertising is also done through online social networks and services such as Skype, Yahoo chats, Facebook, Twitter, and blogs. The convergence of the internet and mobile phones has also led to advertising through SMS, MMS, and other mobile formats.

The telecommunication network underlying the internet is a worldwide network of computer links that connects hundreds of thousands of individual networks all over the world. This giant network of networks has become the primary infrastructure for both electronic commerce and electronic business (Egan, 2007). The internet can be accessed through an Internet Service Provider (ISP), a commercial organization with a permanent connection to the internet that sells temporary connections to subscribers.

ICT is understood as a set of tools commonly used by people to search for information, analyze particular matters, communicate with each other, and share points of view and thoughts, mainly through social services developed specifically for connecting people in different parts of the world. These are mediums that utilize “both telecommunication and computer technologies to transmit information, as hand-held devices like mobile phones” (Egan, 2007).

The Internet can be regarded as a key technology and as a general-purpose technology (GPT) that can be applied across the entire economy. Being a pervasive technology, the Internet represents not only a new industry in itself, but also new preconditions for social interaction, economic transactions, and new approaches to production and marketing. The Internet is often used by consumers to conduct pre-purchase information searches (Kotler, 2006).

This search phase can include becoming familiar with the products on display, prices, and recommendations based on past purchases, as well as looking for good offers, being exposed to or engaging with online advertising, and looking for other people’s experiences with given products or services. Word of mouth (WOM) is traditionally seen as a powerful way to persuade consumers. On the Internet, word of mouth in the form of face-to-face conversation among consumers about their experiences with products and services is replaced by e-WOM, which is less personal but more ubiquitous.

Despite e-WOM being impersonal, research shows that online reviews have an effect on consumer decision making. It has been found that negative reviews affect consumer decisions regarding utilitarian products, whereas positive reviews affect consumer decisions regarding hedonic products.

SWOT Analysis

Strengths

BlackBerry-maker RIM has incorporated several features in its phones that add strength to the product. These include a unique look and feel accompanied by its own mobile operating system. The phone has sensors that work with the physical QWERTY keyboard, currently one of the unique technologies within the industry. These new features are presented to a large and loyal user base that the company has accrued over the years.

Its marketing has received support from across the internet, saving the company millions of shillings in advertising fees. Finally, the fact that the company was among the first to deliver in the smart phone arena is one of its greatest strengths. BlackBerry Enterprise Server (BES) makes it easier to push new emails out to phones (Subramanian, 2005).

Weaknesses

The BlackBerry, like any other product on the market, has a set of weaknesses. The BlackBerry is not a 3G device and will not work in some of the technologically advanced regions such as Japan and Korea, since not all BlackBerry phones are 3G compliant. Some of its features do not impress at all, such as the sub-par camera of approximately 2 megapixels and the limited in-built memory.

The prices seem moderately high, and research reveals that 52% of consumers are happy with their current mobile device, since the BlackBerry is designed only for high-end consumers. BlackBerry-RIM’s choice of distribution channel has proved quite challenging since it can only serve a limited number of consumers. The phone’s suitability for corporate duties is also not fully satisfactory, especially in the area of sending mail (Subramanian, 2005).

Opportunities

There is a demand within the market for a better mobile computing experience with faster internet access. The BlackBerry accomplishes this by incorporating both powerful computing and entertainment technology within one system. There is also the possibility of bringing desktop capabilities into the BlackBerry smart phone. The company has taken the crucial step of providing an Internet Protocol-based network through its smart phones. The emergence of Wi-Fi networks has introduced the prospect of visitor fees instead of provider lock-in for users. There is also the possibility of eliminating the monthly subscription fee through a pay-per-view system (Subramanian, 2005).

Threats

The majority of threats within the phone industry come from established companies such as Nokia, Sony, and Google with their respective products. Rival smart phones are a popular example of products that stand to compete against the BlackBerry. The fact that other companies manufacture phones that run on the 3G network puts BlackBerry behind in the speed race. The economic downturn presents a potential threat to the marketability of the BlackBerry, since consumers are currently very cautious about spending their money (Subramanian, 2005).

Design of BlackBerry

The BlackBerry’s functionality is accessed through its QWERTY keyboard, which makes it possible to use finger commands and enables free navigation through the phone’s features. The phone has a higher resolution per inch, making the quality of its videos and photos very appealing. Its screen visibility is enhanced by an ambient light sensor, which automatically adjusts the BlackBerry’s display brightness while also saving power. The phone has a standard audio headphone jack and Bluetooth stereo transmission (Subramanian, 2005).

Features

Contacts within the BlackBerry phone are automatically synchronized with other associated networked devices, and voicemail is accessible through an email-style list selection. BlackBerry’s in-built advanced features comprise conference calling, a speakerphone, and text and multimedia messaging, and its proximity sensor detects when the phone is in use and immediately turns off the display to conserve power (Subramanian, 2005).

Wireless Internet Communication Device

The BlackBerry accesses the internet as a Wi-Fi enabled device that uses the available browsers to reach tools such as Internet email, web sites, online maps, and search engines. The phone’s web capabilities offer a rich HTML email client with embedded images that can be used seamlessly with a Mac or a PC. The BlackBerry provides Google Maps directions, free push forwarding of Yahoo email messages, and connectivity to other internet widget applications, such as Java applications that provide updated information on stock quotes, sports scores, weather reports, traffic conditions, and other services. Other supported technologies include Bluetooth and GSM (Subramanian, 2005).

PDA, Computer and Camera

The BlackBerry phone provides PDA features such as appointment calendars, contact lists, photos, emails, and documents accessible from the keyboard options. The market is soon expected to receive new inventions from developers building specific applications. The BlackBerry runs a multitude of innovative features capable of supporting powerful applications in the future, especially as the world turns increasingly towards smaller mobile devices as a computing platform.

The BlackBerry’s in-built camera takes pictures at a 2 MP resolution that can be stored on flash memory cards of more than 2 GB or shared with other consumers’ phones. The phone has an internal accelerometer capable of detecting the movement of the phone, so it automatically adjusts the content display accordingly. The video storage capabilities are excellent owing to the availability of cheaper flash storage devices (Subramanian, 2005).

Problem

Modern telecommunications rely on modern technology, and one of the most important elements of that technology is applied computer technology. The computer has made it possible to send electronic messages anytime, anywhere. This new technology has also changed the methods by which information is manipulated and stored, as well as the way people do business, work, play, live, and think. An example of ICT powering advertisements through phones is digital signage networks, where adverts are served across different regions and managed via networks.

Improving the use of ICT through phones has created totally new and innovative ways of reaching more potential clients than ever envisioned by any other media form. Online search companies like Google offer ad programs such as AdWords (a paid advertising solution for companies selling online) and AdSense (which allows web publishers to display ad content on their websites for a commission), while social network advertising, such as on Facebook, lets ads be targeted according to the geography, interests, and education levels of intended clients. The telecommunication industry is growing fast, with players competing for market leadership and the highest numbers of account holders, which has spurred the rise in advertising by industry players.

Studies have been carried out in the past on the state of advertising in African countries. One study examined “The state of advertising practices by private hospitals in Nairobi” and found that trade journals and newspapers were the most common advertising media, used by 41% of hospitals, while the internet was the least popular medium, used by 12.8%. The last five years have been characterized by rapid advancements in ICT, and this calls for research to assess the extent to which internet advertising is now used through phones (Kotler, 2006).

Research on ICT policy and ways of improving existing policies in third world countries found that national ICT policy recognizes the current ICT infrastructure as poor and in need of improvement. The last five years have been characterized by rapid advancements in ICT, and this calls for research to assess the extent to which internet advertising is now used. A related study assessed the relationship between flexible work practices and organizational performance in a survey of advertising agencies in developing countries; its findings call for reinforcing the relationship between the various dimensions of flexible work practices and organizational performance (Kotler, 2006).

Conclusion

The reviewed studies indicate a knowledge gap regarding the extent to which the internet is used in advertising. This lays the basis for such a study, since none of the major studies carried out reveals much about the use of internet advertising through smart phones in third world countries. This study therefore seeks to fill the knowledge gap and provide an in-depth analysis of the role the phone industry plays in internet advertising and how effective it is, especially in the rapidly growing telecommunications industry, which is experiencing tension over pricing mechanisms and the recent number portability wars among the players.

References

Christopher, M. (2000). The Agile Supply Chain: Competing in Volatile Markets. Industrial Marketing Management, 29(1), 37-44.

Converse, T., Park, J., & Morgan, C. (2006). PHP5 and MySQL Bible. New Delhi: Wiley.

Egan, J. (2007). Marketing Communications. London: Thomson Learning.

Forrester, J. (1961). Industrial Dynamics. Cambridge: MIT Press.

Kotler, P. (2006). Marketing Management: Analysis, Planning, Implementation & Control. London: Prentice Hall.

Michelle, W. L. F. (2009). Issues in Informing Science and Information Technology. Melbourne: Victoria University.

Subramanian, S. (2005). Alpha Profit Investment, LLC. Web.


Risk Assessment: Fault and Event Tree Analysis

Purpose and objectives of fault tree analysis and event tree analysis

Fault tree analysis makes use of Boolean logic functions and graphical methods to identify probable faults and likely failures of any given system, to establish the associated hazards, and to institute corrective measures for the product. This enhances the product's safety and hence improves its reliability. The analysis adopts a top-down approach to breaking down the problem. The key purpose of fault tree analysis is to identify the shortcomings in a given product or service and to arrive at appropriate solutions to those shortcomings. Event tree analysis, on the other hand, applies a bottom-up approach to risk assessment for most management and decision systems. Just like fault tree analysis, the method makes use of Boolean combinational logic in developing the analysis (Rechard, 1999).

Ideally, a fault tree analysis aims at graphically representing the failure interactions that may lead to a defined top event. It is a powerful tool that is applied in modeling common-mode failures as well as independent combinations of failures. To that effect, it aims at capturing both human errors and hardware failures. Thus, the purposes of a fault tree analysis may include the following: quantitative risk assessment (QRA); identifying combinations of occurrences that may lead to hazardous events; probabilistic safety assessment (PSA); assessing the safety integrity level; and complementing HAZOP studies. It is also employed in the safety engineering field to quantitatively determine the likelihood of a safety hazard.

An event tree analysis, on the other hand, aims at identifying and quantifying the outcomes of an initiating event. Its logic is usually represented graphically, so it is capable of illustrating how an initiating event can develop into multiple possible outcomes. It is regarded as a valuable tool, employed specifically to analyze the effects likely to arise from an undesired occurrence or a malfunction. Indeed, it is used to model accident scenarios that have several safeguards as protective layers (Dietz, 1998).

Event tree analysis stands out in the sense that several failures can be analyzed simultaneously without foreseeing the end events, and weaknesses of the system can easily be identified for rectification. The weakness of this method is that operating pathways must be anticipated, and some successes and failures cannot be distinguished, as is the case for fault tree analysis. Fault tree analysis serves several purposes, such as providing both qualitative and quantitative formats for evaluation and giving a vivid description of the system functions leading to undesirable outcomes. Event and fault tree analysis help in identifying potential failures, particularly in the manufacturing and processing sectors (Hixenbaugh, 1968).

Comparison with other techniques for reliability and risk assessment

Other forms of risk assessment include Failure Modes and Effects Analysis, the Bowtie method, and the use of reliability block diagrams. These forms are closely linked to fault tree analysis, except that whereas those methods are inductive, fault tree analysis assumes a top-down deductive approach capable of breaking down the complexity of any given system (Hixenbaugh, 1968). Failure Modes and Effects Analysis applies the bottom-up approach with a specific focus on a single element (subsystem) of the entire system. This indicates that Failure Modes and Effects Analysis (FMEA) and fault tree analysis (FTA) are complementary methods, whereby FMEA is used to analyze internal initial faults, and FTA is used for multiple external failures affecting the system (Boud, 1993).

Ideally, as Boud (1993) puts it, “FTA can be employed to illustrate how a system is capable of resisting multiple, or single initiating faults, but it is incapable of determining all probable initiating faults”. On the contrary, “FMEA can be employed to exhaustively catalog the initiating faults, as well as to identify their local impacts” (Boud, 1993). However, it cannot be employed in the assessment of multiple failures or their impacts at the system level. Nonetheless, external events are considered by both ETA and FTA, but not by FMEA. The Success Tree Analysis, which is equivalent to a Dependence Diagram, is the logical inverse of the fault tree because it employs paths, rather than gates, to depict a system. FTA provides the likelihood of a top event, while both the Success Tree Analysis and the Dependence Diagram avoid a top event and produce the likelihood of success.

Fault tree analysis is essentially a failure analysis in which Boolean logic is employed to combine a series of lower-level events in order to assess the system's undesired state. A binary tree, in contrast, is based on binary logic whereby an event has either taken place or not taken place. In fault tree analysis, the steps to be followed are: system definition; comprehending the system; defining the top event; fault tree construction; qualitative assessment; assigning reliability data; and quantitative assessment. For instance, the function and boundaries ought to be established in the system definition. During the construction of the fault tree, the following should be done: the gate symbols and types should be set to represent the fault tree logic; a top-down approach ought to be employed; and failure modes ought to be identified.

Nonetheless, in fault tree analysis, minimal cut sets are vital because they can be used to check which identified failures may result in the top event. Complex fault trees are typically handled with computer codes, while minimal cut sets can be derived manually in circumstances where the fault tree is simple. Equipment reliability data is vital in fault tree analysis, and care should be taken that the available data is apposite and pertinent. To that effect, the repair time or test interval can be used to derive the failure probability from the failure rate data.

Ideally, a fault tree ought to encompass dependent failures as well as the relevant human errors, because these often dominate the top event. Dependent failures can be quantified by several methods, such as the partial beta factor, the reliability factor, and the beta factor, while an established model or historical data can be employed in quantifying human error. Minimal cut sets can be used in the quantification of a fault tree, with AND gates combining probabilities and frequencies. The result ought to be reviewed to ensure that the top event probability is reasonable.
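As a minimal illustration of this quantification step (a sketch with hypothetical basic-event probabilities, using the rare-event approximation for OR gates rather than any particular software tool):

    # Sketch of fault tree quantification with hypothetical basic-event probabilities.
    # AND gates multiply probabilities (independent events); OR gates are approximated by
    # the sum of their inputs (rare-event approximation, valid when probabilities are small).

    from math import prod

    basic_events = {                 # hypothetical failure probabilities
        "pump_fails": 5.1e-4,
        "backup_pump_fails": 6.8e-4,
        "valve_blocked": 1.1e-3,
        "operator_error": 1.0e-3,
    }

    def AND(*probs):
        return prod(probs)

    def OR(*probs):
        return sum(probs)            # approximation to 1 - prod(1 - p) for small p

    # Hypothetical top event: cooling lost =
    #   (pump fails AND backup pump fails) OR (valve blocked AND operator error)
    cut_set_1 = AND(basic_events["pump_fails"], basic_events["backup_pump_fails"])
    cut_set_2 = AND(basic_events["valve_blocked"], basic_events["operator_error"])
    top_event = OR(cut_set_1, cut_set_2)

    print(f"cut set 1: {cut_set_1:.3e}")
    print(f"cut set 2: {cut_set_2:.3e}")
    print(f"top event probability: {top_event:.3e}")

Here each AND gate corresponds to one minimal cut set, and the OR gate over the cut sets gives the top-event probability, which is the same structure used for the reactor coolant system later in this report.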

In event tree analysis, the steps to be followed are: identifying the initiating event; identifying the safeguards and then determining the outcomes; constructing an event tree covering all the safeguards; classifying the outcomes into groups with similar consequences; quantifying branch probabilities; quantifying outcomes; and testing outcomes. Thus, an event tree is simply a graphical illustration of the scenarios of events that are likely to result from an initiating event.

The frequency of end states, or frequency of outcomes, can be identified and quantified using an event tree. Furthermore, an event tree can be combined with a fault tree as part of a Quantitative Risk Assessment or Probabilistic Safety Assessment. The mathematics of event tree analysis is relatively simple in comparison to that of fault tree analysis. Moreover, in an event tree analysis, the frequency of the initiating event should equal the sum of the end-state frequencies. Event trees can be used on their own with simple node probabilities, or applied in combination with fault trees. A small check of this conservation property is sketched below.
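The following is a small sketch of that conservation check, using purely hypothetical branch probabilities (not the plant data used later in this report):

    # Sketch: end-state frequencies of a two-safeguard event tree, and the check that
    # they sum back to the initiating-event frequency (hypothetical numbers).

    INITIATING_FREQUENCY = 1e-4     # hypothetical initiating-event frequency

    p_safeguard_1_works = 0.9       # first safeguard succeeds
    p_safeguard_2_works = 0.8       # second safeguard, demanded only if the first fails

    end_states = {
        "safe shutdown":       INITIATING_FREQUENCY * p_safeguard_1_works,
        "mitigated release":   INITIATING_FREQUENCY * (1 - p_safeguard_1_works) * p_safeguard_2_works,
        "unmitigated release": INITIATING_FREQUENCY * (1 - p_safeguard_1_works) * (1 - p_safeguard_2_works),
    }

    for name, freq in end_states.items():
        print(f"{name}: {freq:.2e}")
    print(f"sum of end states: {sum(end_states.values()):.2e}   (equals the initiating frequency)")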

Because forward logic is employed in an event tree analysis, it provides an inductive approach to scrutinizing reliability. Fault trees, by contrast, use a deductive approach, because they are designed by defining top events and using backward logic to delineate causes. The analysis of a fault tree is nonetheless closely correlated with that of an event tree, since the logical procedures used to develop the sequences of an event tree and to quantify their impacts are similar to those employed in fault tree analysis (Campbell, 2003).

Fault tree analysis for a reactor coolant system

A cooling system is a very integral component in an industrial setting because it can be used for safety, protection, and even maintenance of equipment (Fayssal 1990). A typical example is a cooling system that is used in power plants, such as the Pressurized water reactor and the Boiling water reactor. Major components of the system include:

  • The core: This is the central processing part of the plant
  • Pressurizer: Provides and controls pressure for the normal working of the system.
  • Steam Generator: Propagates steam in the system
  • Turbines: They have rotary motion providing mechanical power in the system
  • Condensers: Are the main locations for system cooling
  • Heaters: They supply heat in the system in prescribed locations
  • Valves: Are the main control points for the flow of water and steam

The coolant system is a very important component in any given plant, especially in a nuclear power plant where a great deal of heat is generated. The failure of a coolant system thus has potentially fatal implications for the system, and an elaborate risk assessment is essential to contain the situation.

Diagrams

Fig. 1. Coolant.
Fig. 2. Typical power plant with coolant.
Table 1. Basic event failure data.

Basic Event | Failure | Failure Rate Data | References | Failure Probability
Core | Overheating | 22.456E-6 | NPRD-95 2-217 | 2.458 E-2
Pressurizer (PZR) | Bursts | 14.125E-6 | NPRD-95 2-221 | 1.546 E-2
Steam Generator (SG) | Breaks down | 0.8792E-6 | NPRD-95 2-224 | 9.627 E-4
Reactor coolant pump (RCP) | Pump fails | 0.1467E-6 | NPRD-95 2-163 | 5.124 E-4
Safety valve (SV) | Blockage | 1.0264E-6 | NPRD-95 2-157 | 1.124 E-3
Main steam isolation valve (MSIV) | Blockage | 0.0453E-6 | NPRD-95 2-157 | 4.960 E-5
Throttle valve (TV) | Valve fails | 0.2719E-6 | NPRD-95 2-157 | 2.977 E-4
Moisture separator reheater (MSR) | Fails | 0.1181E-6 | NPRD-95 2-186 | 1.293 E-4
Main turbine (MT) | Breaks down | 0.0213E-6 | NPRD-95 2-169 | 2.332 E-5
Turbine LP (TLP) | Breaks down | 0.4475E-6 | NPRD-95 2-168 | 4.900 E-4
Main condenser (MC) | Condenser fails | 0.1124E-6 | NPRD-95 2-156 | 1.231 E-4
Condensate pump (CP) | Condenser fails | 0.2245E-6 | NPRD-95 2-156 | 2.458 E-4
Clean up system (CUS) | Residue accumulation | 0.1824E-6 | NPRD-95 2-114 | 1.972 E-4
LP heater (LPH) | Heater fails | 0.1246E-6 | NPRD-95 2-148 | 1.364 E-4
HP heater (HPH) | Heater fails | 0.1476E-6 | NPRD-95 2-148 | 1.616 E-4
Condensate storage tank (CST) | Coil failure | 0.1654E-6 | NPRD-95 2-156 | 1.811 E-4
Safety injection system (SIS) | System fails | 0.5713E-6 | NPRD-95 2-157 | 6.255 E-4
Safeguards pumps (SP) | Pump fails | 0.6231E-6 | NPRD-95 2-163 | 6.822 E-4
Auxiliary feed water (AFW) | Supply cut off | 0.7481E-6 | NPRD-95 2-152 | 8.192 E-4

Calculation of failure probability

The test interval has been taken as three months.

The failure rate data is obtained from published sources such as NPRD-95 (referenced in table 1) and overviews of quantitative risk assessment methods.

Calculations were done based on the formula FP = FRD × (time in hours) / 2.

The time interval was taken as 2190 hours (Alber, 1996).

Test interval = (365 x 24) x (3 / 12) = 2190 hours

For example, FP (core) = 22.456E-6*2190/2 = 2.458E-2
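The same calculation can be reproduced for every basic event in the table; a short sketch (using only the formula and the 2190-hour test interval stated above, with failure rates copied from table 1) is:

    # Failure probability from failure rate and test interval: FP = FRD x T / 2,
    # with T = 2190 hours (three-month test interval) as stated above.

    TEST_INTERVAL_HOURS = 365 * 24 * (3 / 12)   # = 2190 hours

    failure_rates = {                           # per-hour failure rate data (FRD) from table 1
        "Core": 22.456e-6,
        "Pressurizer (PZR)": 14.125e-6,
        "Steam Generator (SG)": 0.8792e-6,
    }

    def failure_probability(frd, t=TEST_INTERVAL_HOURS):
        return frd * t / 2

    for name, frd in failure_rates.items():
        print(f"{name}: FP = {failure_probability(frd):.3e}")
    # Core: FP = 2.459e-02, matching the 2.458 E-2 table entry to rounding.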

Fault Tree Analysis for Reactor Coolant System

The design of this fault tree follows the general steps outlined earlier: the gate symbols and types are set to represent the fault tree logic, a top-down approach is employed, and the failure modes are identified. Minimal cut sets are then used to check which identified failures may result in the top event; because the tree here is relatively simple, they can be derived manually. The equipment reliability data in table 1, together with the test interval, is used to derive the failure probabilities, and dependent failures and relevant human errors are considered because they often dominate the top event. The minimal cut sets are quantified through AND gates, combining probabilities and frequencies, and the result is reviewed to ensure that the top event probability is reasonable.

In this case, the top event is the failure of the reactor coolant system. Overheating is likely to result from this failure: the reactor may lose its coolant, the emergency core cooling system may then fail to operate, and the reactor protection system may fail to shut down the reactor during a major fault. The effect of such a failure is overheating. Thus, to prevent these impacts, several barriers, such as pumps and safety valves, ought to be introduced into the system. In this case (as in figure 1 above), the barriers that have been introduced are the throttle valve, the safety valves, and the moisture separator reheater. In figure 1 there is a main steam isolation valve that is used to control the flow of steam; in cases where the main steam isolation valve fails, the throttle valve can be employed for emergencies. Moreover, there is a condensate pump, but in case of its failure, the main feed-water pump will be used for emergencies. Figure 2 has a generator, but in case of failure, the steam generator will be used for emergencies.

The OR gate is normally used to sum up components, while the AND gate is employed to obtain the product of components. In the fault tree diagram, the top event is the coolant system failure. However, for this failure to occur, certain things have to take place. To make the coolant system fail, the primary or the secondary circuit should fail. For the primary coolant circuit to fail, one of the primary system components must fail (core and container failure, heat release from the PZR, a leak in the SG, or failure of the RCP) together with one of the backup system components (failure of the safeguards pump or failure of the safety injection system). Ideally, for the coolant system to fail, either the primary circuit or the secondary circuit must fail. In the primary circuit, for the backup to fail, the safeguards pump should fail or the safety injection system should fail. And for both T2 and T3 to fail, turbine 2 should fail and turbine 3 should fail.

In some cases, there is standby equipment capable of providing feed the moment the primary equipment fails. This equipment is usually started by the following systems, which are employed in emergency cases: a safety valve, a back-up system for the primary circuit, and the recovery pump. These components come into operation the moment the system fails. Finally, the other source of failure might be the right-hand gate 11, since another failure, namely the loss of auxiliary feed water, could also lead to coolant system failure.

Fault Tree Analysis for Reactor Coolant System

Definition of gates used.

The cut-set table makes use of AND gates in the computation of the probabilities. Cut set values are obtained by multiplying the failure probabilities of two related components in the system. These cut set values are vital in fault tree analysis since they show trends for the different fault points in a given system.

For example, Core*safeguard pumps= 2.458E-2*6.822E-4 = 1.678E-5. Cut set values for the whole system are found in this manner.
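A brief sketch of the same computation (failure probabilities copied from table 1; only a few representative cut sets are shown):

    # Cut set probabilities as products of basic-event failure probabilities (AND gates),
    # with the system total obtained by summing the cut sets (OR gate, rare-event approximation).

    fp = {                     # failure probabilities from table 1
        "Core": 2.458e-2,
        "PZR": 1.546e-2,
        "SP": 6.822e-4,        # safeguards pumps
        "SIS": 6.255e-4,       # safety injection system
    }

    cut_sets = {
        "Core.SP": fp["Core"] * fp["SP"],
        "Core.SIS": fp["Core"] * fp["SIS"],
        "PZR.SP": fp["PZR"] * fp["SP"],
        "PZR.SIS": fp["PZR"] * fp["SIS"],
    }

    for name, p in cut_sets.items():
        print(f"{name}: {p:.3e}")
    print(f"partial total over these cut sets: {sum(cut_sets.values()):.3e}")
    # Core.SP evaluates to about 1.677e-05, matching the 1.678 E-5 entry in table 2.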

Table 2. Cut set probabilities.

Cut set | Probability
Core.SP | 1.678 E-5
Core.SIS | 1.538 E-5
PZR.SP | 1.055 E-5
PZR.SIS | 9.67 E-6
SG.SP | 6.567 E-7
SG.SIS | 5.928 E-7
RCP.SP | 3.495 E-6
RCP.SIS | 3.205 E-7
AFW.SG | 7.886 E-7
AFW.Condenser | 1.008 E-8
AFW.MT | 1.911 E-7
AFW.MSR | 1.059 E-6
AFW.CP | 2.014 E-8
AFW.MFWP | 1.936 E-7
AFW.SV1 | 9.208 E-6
AFW.SV2 | 9.208 E-6
AFW.SV3 | 9.208 E-6
AFW.MSIV | 4.063 E-8
AFW.TV | 2.438 E-7
AFW.(T1.T3) | 4.014 E-7
AFW.HPH | 1.324 E-8
AFW.CS | 1.6154 E-7
AFW.LPH1 | 1.1173 E-7
AFW.LPH2 | 1.1173 E-6
CST.SG | 1.7434 E-6
CST.Condenser | 2.2293 E-6
CST.MT | 4.2232 E-6
CST.MSR | 2.3416 E-8
CST.CP | 4.4514 E-8
CST.MFWP | 4.2812 E-7
CST.SV1 | 2.0355 E-7
CST.SV2 | 2.0355 E-8
CST.SV3 | 2.0355 E-8
CST.MSIV | 8.9825 E-7
CST.TV | 5.3913 E-7
CST.(T1.T3) | 8.8739 E-8
CST.HPH | 2.9265 E-8
CST.CS | 3.5712 E-7
CST.LPH1 | 2.4702 E-8
CST.LPH2 | 2.4702 E-8
Total Probability | 4.675 E-5

Fussell-Vesely and Birnbaum

Fussell-Vesely and Birnbaum values play a critical role in fault tree analysis. These values indicate the relative contribution of each component to the risk in the system.

The Fussell-Vesely value is obtained by adding the probabilities of all the cut sets in table 2 that contain a specific component and then dividing by the total probability (TP) in table 2 (Ericson, 1999).

For example, for the core: ((Core.SP) + (Core.SIS)) / TP = (1.678 + 1.538)E-5 / 4.675E-5 ≈ 0.688

Birnbaum values are obtained here by dividing the failure rate data of the specific component (table 1) by the total probability in table 2 (Campbell, 2003).

For example, for the core: 22.456E-6 / 4.675E-5 ≈ 0.48 (Lindsay, 1997).
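Both importance measures, as computed in this report, can be reproduced directly from tables 1 and 2; a minimal sketch for the core is shown below. Note that the formulas here follow the report's own worked examples rather than textbook definitions of the measures.

    # Fussell-Vesely and Birnbaum values for the core, following the worked examples above.
    # Inputs: cut set probabilities and total probability from table 2, failure rate from table 1.

    TOTAL_PROBABILITY = 4.675e-5            # sum of all cut set probabilities (table 2)

    core_cut_sets = [1.678e-5, 1.538e-5]    # Core.SP and Core.SIS from table 2
    core_failure_rate = 22.456e-6           # per-hour failure rate from table 1

    fussell_vesely = sum(core_cut_sets) / TOTAL_PROBABILITY
    birnbaum = core_failure_rate / TOTAL_PROBABILITY

    print(f"Fussell-Vesely (Core): {fussell_vesely:.3f}")   # about 0.688 (table lists 0.687)
    print(f"Birnbaum (Core):       {birnbaum:.3f}")         # about 0.480 (table lists 0.483)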

Table 3. Fussell-Vesely and Birnbaum values.

Basic Event | Fussell-Vesely | Birnbaum
Core | 0.687 | 0.483
Pressurizer (PZR) | 0.046 | 0.018
Steam Generator (SG) | 0.094 | 0.06
Reactor coolant pump (RCP) | 0.016 | 0.014
Safety valve (SV) | 0.024 | 0.002
Main steam isolation valve (MSIV) | 0.021 | 0.055
Throttle valve (TV) | 0.014 | 0.092
Moisture separator reheater (MSR) | 0.045 | 0.084
Main turbine (MTHP) | 0.062 | 0.076
Turbine LP (TLP) | 0.076 | 0.058
Main condenser (MC) | 0.038 | 0.032
Condensate pump (CP) | 0.064 | 0.008
Clean up system (CUS) | 0.087 | 0.012
LP heater (LPH) | 0.026 | 0.014
HP heater (HPH) | 0.042 | 0.026
Condensate storage tank (CST) | 0.065 | 0.045
Safety injection system (SIS) | 0.072 | 0.033
Safeguards pumps (SP) | 0.014 | 0.017
Auxiliary feed water (AFW) | 0.541 | 0.034

FTA Conclusion

In conclusion, based on the assessment results, the top event is the reactor coolant system failure, and for this failure to occur several contributing events have to take place. The failure probability values were used while developing this fault tree: the product of components was obtained through the AND gates, and the sum of components through the OR gates. The most probable causes of the top event were identified from the cut set values, which are vital in the analysis of a fault tree because they reveal trends across the multiple fault points of a given system. The other source of failure might be the right-hand gate 11, the auxiliary feed water. Finally, this analysis can be improved by accurately computing the Birnbaum and Fussell-Vesely values, given their important role in the analysis of a fault tree.

Event Tree Analysis of Plant Hazard


This analysis is based on the event of core failure as discussed below.

One of the coolant system failure modes is core melt and explosion. The core can melt when a severe, compounded failure of systems or components prevents the reactor core from being cooled properly, causing its assemblies to overheat and/or melt, which may then lead to an explosion. A core meltdown can also produce ionizing radiation with its associated biological impacts. Moreover, a core melt can lead to a pressure release and render the reactor unusable until it is repaired, so the operator will incur additional expenses.

Typically, the event tree analysis is created to illustrate the various impacts of a core melt. For instance, following a single branch from the top event to one of the impacts of the core melt, such as pressure release, the probability of occurrence is 50 percent; this is mainly due to valve failure, which leaves the valve open. The probability of the valve failure taking place is 10 percent, and, if it occurs, it could result in pressure reduction, whose probability is 10 percent. This will eventually result in the core being overheated; the corresponding end-state frequency is calculated in the discussion below. Accordingly, the end states with similar impacts ought to be combined to obtain the frequency of the core melt as well as the frequency of the explosion.

ETA Conclusion

Based on the results, the impact with the highest probability is pressure release, with a probability of 50 percent, because the valve would have failed and thus been left open. A core melt can lead to a pressure release and render the reactor unusable until it is repaired. As a result, the operator will incur additional expense or effort to prevent this from taking place or to repair it.

Discussion

The event tree analysis makes use of the failure probabilities to determine failure frequencies (Eckberg, 1964). Each component is analyzed in depth to evaluate the occurrences that take place in case of a failure. The above case sequentially analyses the events that follow a likely core failure due to melting and explosion. The failure rate data for the core is given as 1E-6. In case the core fails due to melting and explosion, the system may be affected in various ways. Pressure may be released prematurely, and the valve may either fail to open or fail to close. Furthermore, the amount of pressure may increase or decrease disproportionately, affecting other parts of the system (DeLong, 1970). The control and protection system may fail to detect the failure in good time, which is again dangerous to the entire system. Event tree analysis can thus be used to highlight the likely risks that core failure poses to the system (Cammisa, 1995).

While constructing an event tree, we begin with the initiating event on the left-hand side and then determine the factors across the top; these factors ought to be arranged in time order. Subsequently, the node branches and their logic ought to be determined, and finally the outcomes should be determined and classified into similar categories. The initiating event is usually the top event of the corresponding fault tree, and in this case it is the coolant system failure; because the identified protection is itself liable to fail, this event tree starts with the core melt and explosion event (see diagram above). As shown in the diagram above, the end probability for the given situation is obtained as the product of the individual branch probabilities leading to the ultimate consequence, multiplied by the failure frequency of the core (Begley, 1968).

Calculation: end-state frequency = (0.1 × 0.1 × 0.5) × 1E-6 = 5 E-9
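The same figure can be reproduced by multiplying the branch probabilities along the path by the initiating-event frequency; a minimal sketch using the values stated above:

    # End-state frequency for one event tree path: the product of the branch probabilities
    # along the path multiplied by the initiating-event frequency (values as stated above).

    INITIATING_FREQUENCY = 1e-6            # core melt / explosion initiating event

    branch_probabilities = {
        "pressure release (valve fails open)": 0.5,
        "valve failure occurs": 0.1,
        "pressure reduction": 0.1,
    }

    end_state_frequency = INITIATING_FREQUENCY
    for branch, p in branch_probabilities.items():
        end_state_frequency *= p

    print(f"end-state frequency: {end_state_frequency:.1e}")   # 5.0e-09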

Conclusion

Fault and event tree analyses are key methods in risk assessment, especially in identifying the most probable causes of failure and detailing the multiple failures involved (Acharya et al., 1990). The methods are thus very important in formulating possible remedies to the foreseen failures.

Reference List

Acharya, S., et al., 1990. Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants. Washington, DC: U.S. Nuclear Regulatory Commission.

Alber, J., 1996. Fault Tree for Safety. Paris: Cooperation and Development.

Begley, T. F. & Cummings, 1968. Fault Tree for Safety.

Boud, D., 1993. Fault Tree Analysis Program Plan. Bristol, PA: Open University Press.

Cammisa, A., 1995. Fault Tree for Safety. Westport, CT: Praeger.

Campbell, A., 2003. Risk Analysis. Princeton, NJ: Princeton University Press.

DeLong, T., 1970. A Fault Tree Manual. Master's Thesis, Texas A&M University.

Dietz, M. E., 1998. An Overview of Quantitative Risk Assessment Methods. Victoria: Hawker Brownlow Education.

Eckberg, C. R., 1964. Fault Tree Analysis Program Plan. Seattle, WA: The Boeing Company.

Ericson, C., 1999. Fault Tree Analysis – A History. Proceedings of the 17th International System Safety Conference.

Fayssal, S., 2000. An Overview of Quantitative Risk Assessment Methods. MSFC.

Hixenbaugh, A. F., 1968. Fault Tree for Safety. Seattle, WA: The Boeing Company.

Launer, L. J., 2005. Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants.

Lindsay, J., 1997. Fault Tree Analysis Program Plan. Stroke, 28, pp. 526-30.

Rechard, R. P., 1999. Historical Relationship between Performance Assessment for Radioactive Waste Disposal and Other Types of Risk Assessment in the United States. Risk Analysis (Springer Netherlands).
