Technology-Education Relationship, Pros and Cons

Introduction

The 21st century is characterized by significant growth in technology; one of the technologies that has undergone significant growth is Information and Communication Technology (ICT). According to Inoue (2007, p. 7), the growth in ICT has revolutionized other economic sectors, such as the education sector. This research paper evaluates the relationship between technology and education.

Accessibility

Technology has made it possible for educational opportunities to be accessible to everyone, anywhere, and at any time, and to deliver education to diverse student populations. One such technology is Early English Books Online (ProQuest, 2011, para. 2). The use of technology in schools is concentrated in higher education. Additionally, accessing education through various technologies is more advanced in developed countries such as the US, the UK, and Japan than in developing countries, because the developed countries have well-implemented ICT networks.

Katsirikou and Skiadas (2010, p. 367) believe that governments in developing countries are striving to revolutionize their education systems by integrating ICT. According to a study conducted in 2008 in the US, the ratio of students to internet-connected instructional computers was 3.8 to 1 (Orey, Jones & Branch, 2010, p. 37). Educators, on the other hand, have relatively high access to technology, at 94%, through their personal and instructional computers (Navarro & Washington, 2008, p. 153).

Over the past decade, there has been an increase in the number of state and federal initiatives developed to expand technology access in US schools. Examples of such initiatives include the E-Rate Discount Program and the Technology Literacy Challenge Fund. Other initiatives, including the National Center for Education Statistics and the Fast Response Survey System, were developed to evaluate the effectiveness of technology in schools (Burns, Farris, Skinner & Parsad, n.d., p. 1).

Use

Students use technology in their learning process in several ways. For example, the internet enables students to search for information necessary to complete their projects, providing a wide source of academic materials such as e-books and journals. Additionally, students use technologies such as telecommunication tools and computer-generated simulations to collaborate with different scholars (ProQuest, 2011, para. 3).

Another technology that is increasingly being used by both students and educators in distance learning is video teleconferencing (Hayes & Wynyard, 2006, p.89). Video teleconferencing is cost-effective in that it enables both instructors and students to save on travel costs. Students also use different technologies to collect data, process the data, and present the findings of their study. Technology is also used by students to solve diverse academic problems and make effective decisions (Adams & Hamm, 2006, p. 223).

On the other hand, educators use technology in different ways to teach their students. One of the technologies commonly used by educators is the internet. The internet acts as a major source of teaching resources. Teachers use the internet to search for innovative curriculums to teach their students. Also, the internet provides different creative lesson plans that teachers can use. Teachers are increasingly using computer technology in creating quizzes and worksheets.

According to MacGregor (2004, p. 52), technology is also used when issuing instructions to students. Examples of instructional technologies currently in use include personal computers, projectors, mini-boards, and smartboards. The projector enables teachers to illustrate their explanations using PowerPoint presentations, while mini-boards and smartboards enable teachers to draw digital graphs and tables to illustrate a concept. Students can use their personal computers to research a topic. Assessment technologies are also used by teachers to evaluate the performance of their students.

Considering the high rate of technological innovation, especially in ICT, the future of education is bright. There is a high probability of the education sector undergoing a significant revolution. This arises from the fact that other technologies, such as emerging social communication networks, will be incorporated into the education system. The resultant effect is that the learning process will become more interactive, thus improving students’ learning potential.

Impact

The integration of technology within the education sector has several positive impacts. Technology enhances the instruction methods used by instructors by making them more student-centered. As a result, it encourages cooperation, creativity, higher-order thinking, and interaction amongst students, hence improving their performance. Other technologies, such as the internet, play a vital role by providing both students and educators with a wide range of educational materials, thereby enhancing the acquisition of knowledge. Technology also plays a critical role in changing students' attitudes towards education.

Despite the positive impact of technology on education, there are several negative impacts. For example, overuse of technology may limit learning for some students, because some acquire knowledge most effectively through hands-on work and interaction with others, and overusing technology such as teleconferencing limits that interaction. Additionally, if not monitored, some students may use their personal computers to play games, wasting a lot of learning time. Technology may also take away a significant amount of learning time in the event of a technical problem.

Conclusion

The analysis has illustrated that integrating technology in the education sector can significantly improve the learning process, because technology benefits both students and educators. For educators, technology makes teaching more effective and efficient; for students, the learning environment is improved. However, there are negative impacts associated with incorporating technology in education.

For example, some students may use technology, such as their personal computers, to play games. This means that a lot of time is wasted. Additionally, technology may limit physical interaction amongst the students and their teachers. Therefore, it is vital for the parties involved to limit technology utilization and implement effective monitoring strategies.

Reference List

Adams, D. & Hamm, M. (2006). Media and literacy: learning in the information age-issues, ideas, and teaching strategies. Springfield, Ill.: Thomas.

Burns, S., Farris, E., Skinner, R. & Parsad, B. (n.d.). Advanced telecommunications in US private schools, 1998-1999. New York: Diane Publishing.

Hayes, D. & Wynyard, R. (2006). The McDonaldization of higher education. Greenwich, Conn: Information Age Pub.

Inoue, Y. (2007). Technology and diversity in higher education: new challenges. Hershey, PA: Information Science Pub.

Katsirikou, A. & Skiadas, C. (2010). Qualitative and quantitative methods in libraries: theory and application; proceedings of the international conference. Singapore: World Scientific.

MacGregor, D. (2004). Literacy software saves struggling readers. The Journal, 32(4), p. 52.

Navarro, M. & Washington, G. (2008). Support mechanisms necessary for the implementation of technology by teachers. New York: ProQuest.

Orey, M., Jones, S. & Branch, R. (2010). Educational media and technology yearbook. New York: Springer.

ProQuest. (2011). EEBO interactions' Web 2.0 technology creates a collaborative community amongst researchers and scholars. Web.


Planning and Work Breakdown in Project Management

Project planning and the importance of its elements

Project planning is a managerial tool used to schedule and report the progress of a project (Wallace & Henry, 2002). Every manager should prepare a project chart before starting a project because it serves as a documented timeline for the entire plan. This can be represented in the form of a Gantt chart, which shows the work breakdown structure, project plans, workloads, and the time frame. Once the project schedule is accepted by project management, it becomes the baseline schedule, a reference point against real progress in the field. Matrix management is an element of project planning practised where an individual reports to more than one line of management.

More commonly, though, the term describes the management of cross-functional and other forms of work that cross traditional vertical business structures. In matrix management, work assignments are allocated to people who possess the same work skills, which brings about a situation where more than one manager exists. This makes it easier for managers to supervise all aspects of the project while meeting deadlines and maintaining the quality of the work.

Earned value management (EVM), or performance management, is a technique for computing and quantifying project performance and following its progress objectively. It measures cost, scope, and schedule. The technique produces a precise forecast of the project's progress based on the prevailing challenges and problems, making it an important control tool for project managers.

More recent research findings show that EVM and its principles are positive indicators of project success (Solomon & Ralph, 2006). However, one limitation is its incapacity to ascertain project quality: it can show a project as under budget and ahead of schedule while customers remain discontented. In addition, EVM must be consistent with the project plan, which limits its use in fast-changing settings; the application of EVM in agile software environments is therefore a current area of study in project management research (Peter & Morris, 2004).
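The core EVM calculations described above can be sketched in a few lines. The project figures below are purely illustrative, not drawn from any cited study.

```python
# Illustrative earned value management (EVM) metrics.
# PV: planned value, EV: earned value, AC: actual cost, BAC: budget at completion.

def evm_metrics(pv, ev, ac, bac):
    """Return the standard EVM variances and performance indices."""
    cv = ev - ac   # cost variance (negative means over budget)
    sv = ev - pv   # schedule variance (negative means behind schedule)
    cpi = ev / ac  # cost performance index (<1 means over budget)
    spi = ev / pv  # schedule performance index (<1 means behind schedule)
    eac = bac / cpi  # estimate at completion, assuming current cost efficiency
    return {"CV": cv, "SV": sv, "CPI": cpi, "SPI": spi, "EAC": eac}

# Hypothetical project: $100k budget, 40% of work planned, 30% done, $35k spent.
m = evm_metrics(pv=40_000, ev=30_000, ac=35_000, bac=100_000)
print(m)  # CPI below 1 and SPI below 1: over budget and behind schedule
```

The forecast value EAC is what makes EVM a control tool: it projects the final cost from performance to date rather than from the original plan.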

Benefits of the work breakdown structure, project organization, and work packages

A work breakdown structure is an orderly, incremental division of the project into phases, deliverables, and work packages (Wallace & Henry, 2002). Project managers use it as a control tool for planning the project, implementing its phases, and determining success. Successful projects are therefore guided by a breakdown structure because of the following advantages. Firstly, a work breakdown structure provides a basis for determining the subsidiary costs of the project's phases and roles.

This acts as a check-off list for assembling the requirements a project needs to take off. Secondly, every terminal element in the breakdown structure can be assigned and acted upon independently. Furthermore, the breakdown structure makes it possible to transfer the requirements of one phase to the next during implementation. In other words, it lets managers plan their work more effectively and efficiently.

It helps ensure that planning for the project is consistent, which provides for effective project execution. Its main purpose is to break complicated activities down into a collection of tasks, which the project manager can then oversee more easily than the complex whole. On cost issues, breakdown structures help distribute the project budget across the specific tasks to be executed, ensuring that the total cost of the tasks does not exceed the total project cost, an overrun that could lead to failure of the project (Wallace & Henry, 2002).

Moreover, it makes tracking the project easier because tasks in a work breakdown structure are clearly defined. In addition, a work breakdown helps define the scope, so the project team completes all the tasks with no additional work to worry about. In a broad sense, all the small tasks that are completed add up to a fully functional project, which is a major criterion for any project. Finally, the breakdown structure streamlines the assignment of responsibility, and the manager responsible uses it to ensure the project is completed on time, within budget, and in full.
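The cost-control benefit of a WBS can be pictured as a tree of phases and work packages whose costs roll up from the bottom and are checked against the budget. The phases and figures below are hypothetical.

```python
# Hypothetical work breakdown structure: each node is either a work package
# with its own cost, or a phase whose cost is the sum of its children.
wbs = {
    "Design": {"Requirements": 8_000, "Architecture": 12_000},
    "Build": {"Implementation": 30_000, "Unit tests": 10_000},
    "Deploy": {"Training": 5_000, "Rollout": 7_000},
}

def rollup(node):
    """Sum costs bottom-up through the breakdown structure."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

budget = 75_000
total = rollup(wbs)
print(f"total {total}, within budget: {total <= budget}")
```

Because every terminal element carries its own cost, any budget overrun can be traced to a specific work package rather than to the project as a whole.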

An example is performance measurement analysis, which measures the size of any variance that occurs; a variance eventually calls for corrective action to be executed. The earned value technique uses the cost controls highlighted in the project plan to assess the progress of the project and the size and impact of any variations that occur while executing it. It therefore involves the formulation of each activity, work package, and control account (Peter & Morris, 2004).

Considering the nature of the project and the work plan, the project is set to roll out without major hitches. The plan is designed to be as flexible as possible, and, given the change control procedures the team will be taken through during the rigorous training period, the laid-out plan is expected to be fruitful. Change control and up-to-date evaluation methods are meant to see the project through.

Emphasis on procedure observation and specific work allocation, with clear work descriptions and an organized hierarchy for overseeing tasks, is expected to give the project a strong start and a perfect finish. It is also expected that, during implementation, a series of evaluation meetings will review progress, challenges, the way forward, and the effecting of necessary changes.

Apart from issues that may arise during implementation, the project is expected to proceed as planned, given the preparations that have brought it this far. To justify all this, the implementation is keen on factoring in the necessary changes as required and as the situation deems appropriate. Therefore, it is important for any project manager to develop a breakdown structure, because it acts as a baseline schedule for the project against which every activity undertaken is measured.

References

Peter, W., & Morris, G. (2004). The Management of Projects. London: Thomas Telford.

Solomon, P., & Ralph, Y. (2006). Performance-Based Earned Value. London: Wiley-IEEE Computer Society.

Wallace, C., & Henry, G. (2002). The Gantt chart, a working tool of management. New York: Ronald Press.


Information Security Management Practices: Risk Assessment

Executive summary

Risk assessment was crucial for the TSF foundation to determine the vulnerabilities and the risk remediation techniques needed to protect the foundation's information assets from being compromised by illegitimate and unauthorised persons. An assessment of the three situations revealed different threats, vulnerabilities, and risk impacts. It was established that the web-based information system security, which rests on OpenSSL, was exposed to the Heartbleed vulnerability, hackers, unauthorised access to the organisation's information systems, loss of data confidentiality, and a lack of management commitment to system security. Recommendations to mitigate the risks include top management support, application support and updates, use of public key infrastructure, use of encryption technologies, password creation and renewal policies, use of firewalls, and regular certificate renewal and revocation through certificate authorities.

Introduction

This is an extensive risk assessment report on three cases at the Sprout Foundation (TSF), which has a financial turnover of roughly AUD 230 million per annum. The organisation has offices at different locations interconnected over the Internet with web-enabled communication devices, and some offices are remotely located because of the nature of the area of operation. According to Akintoye and MacLeod (1997), each country of operation is governed by different regulations, such as data privacy laws, data protection acts, and other laws governing the distribution, sharing, and use of data (Da Veiga & Eloff 2010). The non-profit organisation's transactions are conducted on a distributed web-based platform. However, it has been established that emerging threats expose the backups because of vulnerabilities in the systems (Sun, Srivastava & Mock 2006). In addition, potential security breaches occurred within the payment gateway, which was designed to incorporate OpenSSL and was therefore exposed to the Heartbleed vulnerability (Stoneburner, Goguen & Feringa 2002).

A review of the standards used

The three situations were best analysed using the Payment Card Industry (PCI) standard, the Data Protection Act 2014, and the National Institute of Standards and Technology (NIST) standard (Bandyopadhyay, Mykytyn & Mykytyn 1999). The organisation was defined in the case study as a tier-2 not-for-profit, despite having some inherent characteristics that fall into the tier-1 not-for-profit category (Spears & Barki 2010).

The Sprout Foundation Case Study and Emerging Problems

The suggested mitigation strategies include making remote backups of data to ensure availability in case of disasters such as environmental events, adopting cloud-based services for remote data storage, adopting high-speed, wide-bandwidth broadband for data communication, and educating employees on their responsibilities to safeguard information assets that are vulnerable to external and internal information security risks. The rationale for these strategies is that backing up data remotely over high-speed communication channels keeps it available and supports real-time storage. Addressing OpenSSL vulnerabilities such as the multi-block corrupted pointer is a technical measure that could preserve the confidentiality and integrity of data while it is being backed up.

Data availability could be ensured by making regular backups, which would enable the institution to make the data available for auditing purposes under the Australian regulations for donor funds. The response to this incident is to opt for a well-established cloud-based backup service provider so that the services are easily accessible and available when needed. Hot and cold backup strategies can also be run on-site, given the sensitivity of the data, by procuring backup hardware and software so that access to the data is maintained even when the service provider is unreachable. Critical systems should be backed up in real time, combining hot and cold storage through time synchronisation, with data transmitted to the requisite authorities for compliance with Australian auditing standards instead of being retrieved at the last minute. In addition, regular system updates could successfully counter OpenSSL vulnerabilities.
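The backup-and-verify idea above can be sketched minimally: copy a file to a timestamped location and confirm its checksum, so a corrupted backup is caught at once rather than at audit time. The paths and naming scheme are illustrative only.

```python
import hashlib
import shutil
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file so the backup copy can be verified against the original."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup(src: Path, backup_dir: Path) -> Path:
    """Copy src into backup_dir under a timestamped name and verify integrity."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    dest = backup_dir / f"{src.stem}.{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 preserves timestamps for audit trails
    if sha256_of(src) != sha256_of(dest):
        raise IOError(f"backup of {src} failed integrity check")
    return dest
```

The same checksum can later be recomputed against the stored copy to detect silent corruption in cold storage.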

Case Study: Emerging Heartbleed Problems and Mitigation Strategies

The mitigation strategies for TSF's vulnerability problems include: preventing employees from using personal devices on the organisation's information systems, which keeps sensitive corporate data safe; implementing remote access controls on both the server side and the client side to secure data transmission across the multiple platforms on which donor funds are transferred through the TSF-ONE corporate system; regularly patching and updating the TSF-ONE system to address application flaws; educating employees on their roles and responsibilities in protecting TSF's information systems; and identifying vendors with reliable software tools, such as those used to encrypt data. Assigning privileges enables non-repudiation, authenticity, confidentiality, and integrity of data, and restricts access to data to those in management positions.

To maintain the integrity, availability, and confidentiality of data, it is worth using data encryption technologies to mitigate the Heartbleed problem, which introduces a security vulnerability that compromises authentication, data confidentiality, data integrity, and the authorisation of access privileges. Encryption keeps remotely stored data safe because only those holding the decryption key can gain legitimate access to the information. In addition, it is necessary to use a public key infrastructure and certificates so that confidential data can be accessed only by those authorised to do so. Certificates can be revoked and keys cancelled to keep sensitive data safe in storage. A combination of certificate authorities, public key infrastructure, and encryption technologies could keep the data safe, while hot, real-time auditing of the data could help the organisation avoid conflict with the Australian authorities.
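As a minimal, standard-library-only illustration of two of the ideas above (key derivation from a passphrase and tamper detection on stored data, not a full PKI or encryption scheme): derive a key with PBKDF2 and seal data with an HMAC tag so that any modification in storage is detectable. A production system would use vetted encryption libraries and proper certificate management instead; the passphrase and payload here are placeholders.

```python
import hashlib
import hmac
import secrets

def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Derive a fixed-length key from a passphrase (PBKDF2-HMAC-SHA256)."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def seal(key: bytes, data: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so tampering with stored data is detectable."""
    return data + hmac.new(key, data, hashlib.sha256).digest()

def unseal(key: bytes, blob: bytes) -> bytes:
    """Verify the tag before trusting the data; raise if it was modified."""
    data, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return data

# Placeholder demonstration: seal a record, then recover it intact.
salt = secrets.token_bytes(16)
key = derive_key("example-passphrase", salt)
blob = seal(key, b"donor-records")
assert unseal(key, blob) == b"donor-records"
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` check can leak timing information to an attacker probing the tag.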

Methodology

System characterisation

System characterisation was done to collect data on each component of TSF's assets that provides the operational capability to transact on both internal and external information systems. In accordance with Halliday, Badenhorst and Von Solms (1996), software applications, protocols, and data users were also identified in the three cases. TSF operates in a client-server environment running the Windows 2000 operating system with MS SQL 2000, with the systems functioning across multiple physical locations. The system's protection was based on software and hardware firewalls, Unified Threat Management (UTM) systems, and routers on the intranet. In addition, web server security was based on OpenSSL, which supports different encryption technologies because it is an open-source implementation of the TLS and SSL protocols (Feng & Li 2011). Several sources of threats are discussed below.

People

Ericsson (2010) maintains that people are the weakest link in any information system security program and form the first line of threat sources because of their high susceptibility to intentional and unintentional errors. In general, the three levels of people who interact with the information systems are the CEO, the CFO, and the employees (Feng & Li 2011).

The operational environment

  1. Functionality of the IT systems.
  2. No support technicians.
  3. Information security policies (system-specific, issue-specific, and organisation-specific) for the different levels of threats to TSF.
  4. The information security architecture of the institution.
  5. Use of an intranet for internal and external communication.
  6. Information systems with built-in security products.
  7. The operational controls.

Threat identification

A threat is defined as the potential for a threat source to exploit vulnerabilities within TSF's operable information system assets. According to Bahli and Rivard (2003), different sources of threats exist that need to be identified and ameliorated using specific risk mitigation strategies.

Sources of threats

Natural threats

The sources of threats that could affect the TSF foundation include natural sources because the foundation operates in marginalised areas that are vulnerable to natural disasters such as floods, tornadoes, earthquakes, and electrical storms.

Human threats

Human sources of threats can be either internal or external to the TSF foundation. Internal human threats consist of employees who work for the foundation. External threats include hackers, network-based attacks such as denial-of-service attacks, and attacks on OpenSSL (Appari & Johnson 2010).

Environmental threats

According to Appari and Johnson (2010), the TSF foundation was vulnerable to environmental threats, which include constant power outages caused by poor electricity infrastructure and the effects of pollutants from factories and other sources that could not be mitigated.

Technical threats

Technical threats arose from disgruntled people within the institution and from external intruders seeking unauthorised access to the information systems.


Table 2: Components of risks, vulnerabilities, and threats.

Row 1
Risk: Unauthorised access to the organisation's network by dialling into it; access can be either logical or physical.
Vulnerability: Lack of accountability, identification, and authentication procedures, leading to identity theft; lack of system-specific and issue-specific policies.
Threat: The internal environment, including employees and managers, software vendors, products, and other third parties.

Row 2
Risk: Use of malicious code to gain access to the organisation's online financial transactions, leading to financial data breaches.
Vulnerability: Credit card transfers from donors to TSF funds (PCI DSS); Heartbleed actions on OpenSSL.
Threat: The cardholder environment, including hackers and terrorists; the OpenSSL missing bounds check (CVE-2014-0160); stolen security keys.

Row 3
Risk: Transfer of funds over the network without the authority of the responsible management.
Vulnerability: TSF's data recovery planning being in the hands of a new company in the market.
Threat: Internal employees or third-party agents who know how data backups are done.

Row 4
Risk: Exploitation of system vulnerabilities (OpenSSL, web-based attacks, compromise of X.509 certificates) to access sensitive information.
Vulnerability: Backdoors from vendors who distribute software with knowledge and documentation of its flaws.
Threat: Scans, probes, and attempted access by users not authorised to use organisational assets (hackers, terminated workers, terrorists, and computer criminals).

Row 5
Risk: A successful attack compromising the organisation's web page and breaching confidentiality.
Vulnerability: Management and those responsible for securing the information systems not enforcing security controls.
Threat: Slow action by the Federal Police, the CEO, and the CFO; exposure of donor information; inaction of the executive team; inaccessible backups exploitable for unauthorised access; insidious information from partners and other stakeholders working with the foundation.

Row 6
Risk: Data loss.
Vulnerability: Compliance with Australian laws and with the differing standards of each country of operation may slow the system and lead to breaches of confidentiality. In the second case, loss of backed-up data could cause financial loss and a lack of accountability if the storage provider becomes insolvent; a bankrupt provider selling its servers could expose other organisations' confidential data.
Threat: The third case concerns the Heartbleed vulnerability.

Table 1: Risk assessment steps.

Step 2: Risk analysis. Input: risk identification. Output: risk identification document.

Step 3: Characterising the system. Inputs: people and processes; technology (hardware and software); data and information; the CEO, CFO, executive team, senior management, and software vendors. Outputs: functions of the system, system boundaries, criticality of the system and its data, and data sensitivity.

Step 4: Threat identification. Inputs: documented data on the security of the system, covering Heartbleed, hardware, software, routers, Unified Threat Management (UTM) systems, firewalls, encryption technologies, and public key infrastructure (PKI). Output: threat statement.

Step 5: Vulnerability identification. Inputs: previous security assessment reports, audit data, and vulnerability assessments. Outputs: list of vulnerabilities and risk assessment reports.

Step 6: Control analysis. Inputs: current controls applied under PCI and other regulations and standards. Output: statement of existing and planned controls.

Step 7: Likelihood determination. Inputs: threat sources, motivation to act as a threat agent, and the effects of existing controls. Output: rating of the potential likelihood.

Step 8: Impact analysis. Inputs: single points of failure; hardware or equipment failure or loss; compromise of the integrity, availability, and confidentiality of data in storage and in transit; system administration practices. Output: impact rating statement.

Step 9: Risk determination. Inputs: the effect of threats on the information system, the effectiveness of the controls used, customer practices, inadequate application support, and data disclosure. Output: statement of risk levels.
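Steps 7 to 9 (likelihood, impact, and risk determination) are often combined in a simple likelihood-by-impact matrix; the numeric scales below follow the common NIST SP 800-30 convention, but the ratings assigned to the TSF threats are illustrative assumptions, not findings from the case.

```python
# Illustrative NIST-style risk determination: risk score = likelihood x impact.
LIKELIHOOD = {"low": 0.1, "medium": 0.5, "high": 1.0}
IMPACT = {"low": 10, "medium": 50, "high": 100}

def risk_level(likelihood: str, impact: str) -> str:
    """Map a likelihood/impact pair to a qualitative risk level."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score > 50:
        return "high"
    if score > 10:
        return "medium"
    return "low"

# Hypothetical ratings for two of the TSF threat sources:
print(risk_level("high", "high"))   # Heartbleed on the payment gateway
print(risk_level("low", "medium"))  # power outages with existing UTM controls
```

The thresholds (50 and 10) are the conventional cut-offs for the 100-point scale; an organisation would tune both the scales and the cut-offs to its own risk appetite.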

Risk analysis

Risk Risk mitigation
1 Environmental risks Physical relocation of information systems to saver places
2 Web based risks (cross site scripting, heartBleed) on the OpenSSL cryptography library Certificate renewal and revocation.
3 information systems inherent flaws System patches and regular updates
4 Lack of management support and IT security skills Training on security awareness and the need for management support
5 Remote access controls of the sever side and client side Configuration management
6 Data availability because of service provider insolvency Contractual agreement with the service provider, on-site and off-site backups.
7 Data loss and theft from server because of remote data storage Employing trusted party.
8 Hardware and software failures because of natural disasters. Cloud computing as an option
9 Internal threats from disgruntled employees Training, policies, standards for information access and data use.
10 Data disclosure because of Australian laws Compliance with data privacy laws, and other data protection acts and standards.
11 Inadequate technical controls such as firewalls Installation of firewalls and antimalware programs.
12 Loss of data because of remote service provider failure to offer services. Considering cloud computing solutions
13 Stakeholder dissatisfaction Assurance of Compliance with information security standards such as PCISS
14 Non-repudiation, authenticity and identification problems because of lack of validation and verification procedures when accessing system resources Incorporate the sue of validation and verification checks
15 Lack of operational issues specific, system specific, and organisation IT security policies Clarification of IT security policies and requirements for compliance
16 OpenSSL vulnerabilities such as Multiblock corrupted pointer, Encryption of data on transit.
17 Phishing Information security policy
18 Lack of effective password user policies (alphanumeric) Password use policy (combination of upper case, and alphanumeric and regularly change the passwords)
19 Cross site scripting such as User accounts being stolen, credentials misused, keystroke logging, content theft, visitor browser stealing, sever bandwidth use, data theft Create and implement data protection and application use polices, Interface and API, additional hardware installation
20 Web server running compromise Limit use of web server, authenticate users, access control lists, and Secure Sockets Layer (SSL), and use appropriate ports.
21 Patches to correct flaws in operating system software not installed. Regular patches and updates.
22 Remote access to server console not properly monitored Set remote access to the right configurations.
23 Internal access to server Contractual agreement with the remote service provider
24 System administration practices not defined Define system administration practices and train staff.
25 Poor Physical Security Implement physical controls
26 Loss of documentation software-compromising confidentiality and integrity of data Safe storage of documentation
27 Inadequate system logging (event and system use information lacking) Implement a log management system.
28 Lack of encryption and keys Use encryption technologies and public key infrastructure (PKI) for key generation.
29 No clear segregation of duties Assign roles and responsibilities to ensure mitigation strategies are dedicated to a specific role
30 Inadequate Applications Support Benchmark of quality, use best practices to procure software from known vendors.
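Row 18's password policy can be made concrete as a simple validation check. The sketch below is illustrative only: the minimum length and the exact character classes are assumptions, not TSF policy.

```python
import re

def meets_password_policy(password: str, min_length: int = 10) -> bool:
    """Check a candidate password against a policy in the spirit of row 18:
    a minimum length plus at least one upper-case letter, one lower-case
    letter, and one digit (an alphanumeric mix)."""
    if len(password) < min_length:
        return False
    checks = [
        re.search(r"[A-Z]", password),  # at least one upper-case letter
        re.search(r"[a-z]", password),  # at least one lower-case letter
        re.search(r"[0-9]", password),  # at least one digit
    ]
    return all(checks)

print(meets_password_policy("summer"))          # False: too short, no digit
print(meets_password_policy("Blue7Harbour42"))  # True
```

A real deployment would also enforce the regular-change requirement from the table, which is an account-lifecycle rule rather than a string check.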

Risk control

Risk control is a fundamental approach to securing an information system. Controls are necessary to ensure that practices are in place to mitigate the risks in the three situations encountered by the TSF foundation.

Table 3: Risk controls strategies.

Vulnerability Threats Existing controls Additional actions to reduce threats
Lack of accountability, identification, and authentication procedures leading to identity theft; lacking system-specific and issue-specific policies. Internal environment such as TSF employees. None. Educate users to account for donor funds, use public key infrastructure, and avoid behaviour that could encourage people to act as sources of threats.
Using credit cards to transfer money from donors to the TSF funds (PCI DSS); Heartbleed attacks on OpenSSL. The environment includes hackers and terrorists exploiting OpenSSL cryptography flaws (a missing bounds check, CVE-2014-0160) and stealing and using security keys. Use of OpenSSL, firewalls on the network, a Unified Threat Management (UTM) system, and other application-level controls. Educate people on information security awareness.
Different regulatory requirements and information security standards; reactive instead of proactive management. Dissatisfied employees, unskilled employees. None. Train the employees and the management on the different laws and standards for data protection and privacy, and on how to react or respond to suspicious activities.
New programs for the improvement of the communities. Vendors, stakeholders, and partners of TSF. Limited roles of the organisation; responsibility lies with the target organisation. Accountability, non-repudiation, and reliable reporting.
A management that is reactive to incidents, such as the news of the insolvency of the remote service provider or data backup services. Unauthorized users (e.g., hackers, terminated employees, computer criminals, terrorists). None. Consider using Public Key Infrastructure (PKI), a secure reverse proxy solution, a single location to manage the keys, secure storage of keys, memory-bounds checks to ensure system security, a single location to remediate issues, and secure gateways for each system user. Patch OpenSSL libraries, regenerate both private and public keys, issue advisories, and ensure that X.509 revocation checks are applied on all servers.
Use of personal computing and memory devices. Employees and third parties who have access to the information systems None Write a policy that prohibits the use of external devices and memory sticks without the authority of the management.
Funds transfer using credit cards. Computer criminals and disgruntled employees. Enabled financial transactions on the web, with security enforced using OpenSSL rules. Comply with PCI DSS for card transactions, create a certificate authority, generate server certificates, and validate the trusts to ensure authentic communication.
A different company used for recovery of TSF data, backup inaccessibility, and reactive remediation. Employees who belong to the non-profit organisation, vendors, backdoors, and malicious code. Off-site data backups done regularly every 30 days. Should consider cloud services and frequent data backups at off-site locations.
Failure to apply new system and application patches. Unauthorized users (e.g., hackers, terminated employees, computer criminals, terrorists). None. Implement security controls such as patching the systems with new updates.
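The Heartbleed remediation actions listed in the table (patching OpenSSL libraries, regenerating key pairs, applying X.509 revocation checks) begin with identifying affected OpenSSL builds. The sketch below is illustrative only: it screens a version string for the vulnerable 1.0.1 through 1.0.1f range and is not a substitute for a full vulnerability scan.

```python
import re
import ssl

def openssl_vulnerable_to_heartbleed(version_string: str) -> bool:
    """Rough screen for Heartbleed (CVE-2014-0160): OpenSSL 1.0.1
    through 1.0.1f are affected; 1.0.1g and later are patched."""
    # Match "OpenSSL 1.0.1" followed by an optional letter a-f, then a
    # word boundary so that e.g. "1.0.1g" does not match.
    return re.search(r"OpenSSL 1\.0\.1[a-f]?\b", version_string) is not None

# The locally linked OpenSSL build can be inspected via the ssl module.
print(ssl.OPENSSL_VERSION)
print(openssl_vulnerable_to_heartbleed("OpenSSL 1.0.1e 11 Feb 2013"))  # True
print(openssl_vulnerable_to_heartbleed("OpenSSL 1.0.1g 7 Apr 2014"))   # False
```

As the table notes, patching alone is insufficient: keys served by a vulnerable build must be treated as compromised, which is why regeneration and revocation follow the upgrade.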

Control analysis

Control analysis deals with the threat levels and the impact each threat could have on the assets of the TSF non-profit foundation. IT security roles and responsibilities have not been clearly outlined: when the OpenSSL threat materialised, the management's response was reactive. In addition, there is a lack of data classification for confidential data such as passwords and credit card numbers.

Level Likelihood Definition
High The threat source is highly motivated and sufficiently capable, and controls to prevent the threat are ineffective.
Moderate A moderate threat is one that can be ameliorated using current security controls.
Low The threat source does not have the potential to cause harm.
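The likelihood levels above combine with impact magnitudes to give an overall risk rating. The sketch below follows the example scale of the NIST risk management guide cited in the references (likelihood weights of 1.0/0.5/0.1 against impact scores of 100/50/10); the exact thresholds are that guide's illustration, not TSF policy.

```python
# Likelihood weights and impact scores, after the NIST SP 800-30
# example risk-level matrix.
LIKELIHOOD = {"High": 1.0, "Moderate": 0.5, "Low": 0.1}
IMPACT = {"High": 100, "Moderate": 50, "Low": 10}

def risk_score(likelihood: str, impact: str) -> float:
    """Overall risk as likelihood weight times impact score."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def risk_level(score: float) -> str:
    """Map a numeric score back onto a High/Moderate/Low rating."""
    if score > 50:
        return "High"
    if score > 10:
        return "Moderate"
    return "Low"

print(risk_level(risk_score("High", "High")))      # High (score 100)
print(risk_level(risk_score("Moderate", "High")))  # Moderate (score 50)
print(risk_level(risk_score("Low", "Low")))        # Low (score 1)
```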

Impact analysis

Table 4. Magnitude of Impact Definitions for Sprout Foundation.

Risk No. Risk Summary Risk Impact Risk Impact Rating
1 Compromised confidentiality, integrity and availability of data Disclosing information without being authorised to do so High
2 Exploitation of the flaws inherent in OpenSSL Exploitation via the Heartbleed vulnerability High
3 Remote access by carrying out financial transaction services on the web. Unauthorized disclosure and modification of data. High
4 Successfully compromise the firewalls and anti-malware programs Leading to illegitimate access and unauthorized disclosure of information. High
5 Data loss because of server access or unavailability. Leading to illegitimate access and unauthorized disclosure of information. High
6 Hardware Issues/Equipment Failure or loss Leading to illegitimate access and unauthorized disclosure of information. Low
7 Single point of failure Compromised or defaced website. Low
8 Lack of information system administration capabilities Leading to illegitimate access and unauthorized disclosure of information. Low
9 Overdependence on the key person Poor system support Low
10 Critical documentation lost because of remote server unavailability Confidentiality and integrity of corporate data could be compromised. Low
11 Clear Text Transmission of Critical Data Confidentiality of corporate data could be compromised. Low

Risk Determination

  1. Web based communication in the presence of the OpenSSL enabled pages.
  2. Accounting for funds
  3. Conducting financial transactions on mobile devices
  4. Overhaul of the system
  5. Storage of data on third party servers
  6. Unavailability of information because of insolvency of the service provider

Risk Analysis Report

Risk Impact Controls in place (Yes-Y, No-N) Recommendation
1 Environmental risks H Y Implement policy
2 Web based risks (cross-site scripting, Heartbleed) on the OpenSSL cryptography library H N Certificate renewal and revocation.
3 Flaws in the information systems M N System patches and regular updates
4 Lack of management support and IT security skills H N Training on security awareness and the need for management support
5 Remote access controls of the server side and client side H N Configuration management
6 Data availability because of service provider insolvency M N Contractual agreement with the service provider, on-site and off-site backups.
7 Data loss and theft from server because of remote data storage L N Employing trusted party.
8 Hardware and software failures because of natural disasters. L N Cloud computing option
9 Internal threats H Y Training, policies, standards for information access and data use.
10 Data disclosure because of Australian laws M Y Compliance with data privacy laws, and other data protection acts and standards.
11 Inadequate technical controls such as firewalls M N Installation of firewalls and anti-malware programs.
12 Loss of data because of remote service provider failure to offer services. H N Considering cloud computing solutions
13 Stakeholder dissatisfaction M N Assurance of compliance with information security standards such as PCI DSS
14 Non-repudiation, authenticity and identification problems because of lack of validation and verification procedures when accessing system resources M Y Incorporate the use of validation and verification checks
15 Lack of operational issues specific, system specific, and organisation IT security policies H N Clarification of IT security policies and requirements for compliance
16 OpenSSL vulnerabilities such as Multiblock corrupted pointer H N Encryption of data on transit.
17 Phishing H N Information security policy
18 Lack of effective password user policies (alphanumeric) H Y Password use policy (combination of upper case, and alphanumeric and regularly change the passwords)
19 Cross-site scripting leading to user accounts being stolen, credentials misused, keystroke logging, content theft, visitor browser hijacking, server bandwidth misuse, and data theft H N Use API policies
20 Web server running compromise M N Use standard authentication methods
21 Patches to correct flaws in operating system software not installed. M N Patches and updates
22 Remote access to server console not properly monitored M Y Configure system properly
23 Internal access to server H N Contractual agreement with the remote service provider
24 System administration practices not defined M N Teach system administration skills
25 Poor Physical Security M Y Implement physical controls
26 Loss of documentation software-compromising confidentiality and integrity of data H N Safe storage of documentation
27 Inadequate system logging (event and system use information lacking), M N Use Log management system
28 Lack of encryption keys M N Use encryption and PKI-based key management
29 No clear segregation of duties M Y Assign dedicated roles
30 Inadequate Applications Support M N Update vendor software
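One practical use of the risk analysis report above is to sort the register so that unmitigated high-impact items surface first. A minimal sketch, using a hand-picked subset of the rows above (the tie-breaking rule, impact first and then absence of controls, is an illustrative choice):

```python
# A few entries from the risk register:
# (risk no., summary, impact H/M/L, controls in place?)
risks = [
    (2, "Web based risks (Heartbleed) on OpenSSL", "H", False),
    (4, "Lack of management support and IT security skills", "H", False),
    (9, "Internal threats", "H", True),
    (21, "OS patches not installed", "M", False),
    (7, "Data loss from remote storage", "L", False),
]

impact_rank = {"H": 0, "M": 1, "L": 2}
# Sort by impact first; within an impact level, put risks with no
# controls (False sorts before True) ahead of controlled ones.
prioritised = sorted(risks, key=lambda r: (impact_rank[r[2]], r[3]))

for no, summary, impact, controlled in prioritised:
    flag = "controls in place" if controlled else "NO CONTROLS"
    print(f"risk {no} [{impact}] {flag}: {summary}")
```

With this ordering, risks 2 and 4 (high impact, no controls) head the list, ahead of the already-controlled internal-threat item.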

References

Akintoye, A S & MacLeod, M J 1997, 'Risk analysis and management in construction', International Journal of Project Management, vol. 15, no. 1, pp. 31-38.

Appari, A & Johnson, M E 2010, 'Information security and privacy in healthcare: current state of research', International Journal of Internet and Enterprise Management, vol. 6, no. 4, pp. 279-314.

Bahli, B & Rivard, S 2003, 'The information technology outsourcing risk: a transaction cost and agency theory-based perspective', Journal of Information Technology, vol. 18, no. 3, pp. 211-221.

Bandyopadhyay, K, Mykytyn, P P & Mykytyn, K 1999, 'A framework for integrated risk management in information technology', Management Decision, vol. 37, no. 5, pp. 437-445.

Da Veiga, A & Eloff, J H 2010, 'A framework and assessment instrument for information security culture', Computers & Security, vol. 29, no. 2, pp. 196-207.

Feng, N & Li, M 2011, 'An information systems security risk assessment model under uncertain environment', Applied Soft Computing, vol. 11, no. 7, pp. 4332-4340.

Halliday, S, Badenhorst, K & Von Solms, R 1996, 'A business approach to effective information technology risk analysis and management', Information Management & Computer Security, vol. 4, no. 1, pp. 19-31.

Spears, J L & Barki, H 2010, 'User participation in information systems security risk management', MIS Quarterly, vol. 34, no. 3, pp. 503-522.

Stoneburner, G, Goguen, A & Feringa, A 2002, Risk management guide for information technology systems, NIST Special Publication 800-30, National Institute of Standards and Technology, Gaithersburg, MD.

Sun, L, Srivastava, R P & Mock, T J 2006, 'An information systems security risk assessment model under the Dempster-Shafer theory of belief functions', Journal of Management Information Systems, vol. 22, no. 4, pp. 109-142.


Technology-Using Teachers

Summary of the Case Study

The exploratory study was conducted by Ertmer, Gopalakrishnan, and Ross (2001) to compare exemplary technology use by teachers and the descriptions of the best practice described in the literature, underlining the significance of teachers’ personal beliefs. The researchers utilized qualitative design to analyze the distinctness in pedagogical views and practices among 17 exemplary technology-using teachers (Ertmer et al., 2001).

The study sets forward research questions interrogating the “characteristics of teachers who perceive themselves as exemplary technology users” as compared to the way the literature describes teacher use of technology (Ertmer et al., 2001, p. 5).

Also, this research compares the teachers' points of view about best practices with those portrayed in pedagogical literature. The analysis showed that the professional context and the personal belief paradigm influence the choice of approaches to classroom work with students, even when teachers consider themselves exemplary technology users. The way teachers use technology in the classroom differs from the implications presented in the literature and is instead shaped by their perceptions and personal beliefs.

Analysis of the Case Study

The modern education process has faced a significant shift in the spectrum of tools used for teaching in the classroom. Due to technological advancement, educators consider technology an influential tool capable of enhancing the learning process (Liu, 2011). Many educational programs develop their curricula following the latest advancements in technology, encouraging teachers to use specific devices in their classroom work (Hew & Brush, 2007).

Despite all the literature outlining the techniques for the best educational practices, many scholars agree that the personal beliefs and views of teachers influence their approaches to the learning process, thus affecting the real-life effectiveness of classroom work (Fullan, 2016; Guskey, 2000; Liu, 2011; Staub & Stern, 2002; Wilkinson et al., 2017). Therefore, the overall educational practice greatly depends on the personal and professional beliefs of teachers (Fullan, 2016). That is why the approach a teacher uses in his or her work reflects subjective professional preferences rather than the knowledge received from literature sources.

Ertmer et al. (2001) view the teachers on an “instruction-construction continuum,” where professionals on the different ends of the continuum have different attitudes towards the approaches to teaching (p. 1).

Teachers utilizing the instructional perspective implement the existing techniques and knowledge used by others because of their effectiveness established in practice. Educational professionals on the constructional side of the continuum, however, concentrate on the individuality of a student and apply specific authentic tasks to facilitate the development of particular skills and knowledge. According to Ertmer et al. (2001), the use of technology in the classroom is more applicable to teachers of the constructional type. Such teachers might utilize personally developed techniques which are not introduced in the literature but are applied on the basis of their pedagogical beliefs and experiences.

According to Staub and Stern (2002), who also refer to the constructivism theory as the basis of contemporary pedagogy, any teaching process depends on the teacher’s perspective which allows for situational control of the educational process for the results’ achievement necessary for a particular student. Indeed, the use of any means of learning, including technology, is determined by the subjective choice of a professional working with the students.

Tondeur, van Braak, Ertmer, and Ottenbreit-Leftwich (2017) state that teachers do not blindly adhere to the literature but adjust the curricular activities and tools to the requirements of a particular class. Although available devices enable the use of multiple approaches to teaching and learning, thus enhancing the applicability of teachers’ beliefs to the educational process, teachers tend to use more traditional methods.

Consequently, the professional development of teachers using technologies in the class should utilize multiple methods in their everyday pedagogical work. To enhance effectiveness, these methods have to be based on teachers’ beliefs and experiences with a priority set on students’ excellence. As Guskey (2000) states, the professional development of teachers should be an ongoing process especially in the demanding world of rapid technological advancement.

Technology-related professional development of educators has to be constructed based on interpersonal experience exchange due to the variety of approaches caused by personal judgment teachers accumulate in real-life classroom activities (Wilkinson et al., 2017). It is vital to make a school learning process the one that aims at the positive knowledge and skills development rather than the blind utilization of literature.

In conclusion, the study by Ertmer et al. (2001) investigates the differences between the exemplary practices of technology use as they are portrayed in literature and the ones carried out by teachers in classrooms. The findings show that the real educational situations that teachers have to deal with in their everyday jobs differ from the ones dictated by literature. This state of affairs imposes shifts in the techniques and educational approaches that depend on the personal beliefs and judgments of teachers. The constructional paradigm enables educators to apply technology more freely aiming at the best learning results. Teachers’ professional development should be adjusted to these findings to facilitate the learning process in schools.

References

Ertmer, P. A., Gopalakrishnan, S., & Ross, E. M. (2001). Technology-using teachers: Comparing perceptions of exemplary technology use to best practice. Journal of Research on Computing in Education, 33(5), 1-27.

Fullan, M. (2016). The new meaning of educational change. New York, NY: Teachers College Press.

Guskey, T. R. (2000). Evaluating Professional Development. Thousand Oaks, CA: Corwin Press.

Hew, K. F., & Brush, T. (2007). Integrating technology into K-12 teaching and learning: Current knowledge gaps and recommendations for future research. Educational Technology Research and Development, 55(3), 223-252.

Liu, S. H. (2011). Factors related to pedagogical beliefs of teachers and technology integration. Computers & Education, 56, 1012-1022.

Staub, F. C., & Stern, E. (2002). The nature of teachers’ pedagogical Beliefs matters for students’ achievement gains: Quasi-experimental evidence from elementary mathematics. Journal of Educational Psychology, 94(2), 344-355.

Tondeur, J., van Braak, J., Ertmer, P. A., & Ottenbreit-Leftwich, A. (2017). Understanding the relationship between teachers’ pedagogical beliefs and technology use in education: A systematic review of qualitative evidence. Educational Technology Research and Development, 65(3), 555-575.

Wilkinson, I., Reznitskaya, A., Bourdage, K., Oyler, J., Glina, M., Drewry, R., … Nelson, K. (2017). Toward a more dialogic pedagogy: Changing teachers’ beliefs and practices through professional development in language arts classrooms. Language and Education, 31(1), 65-82.


Cloud Computing and Virtualization Technologies

Introduction

Cloud computing refers to the use of the internet to deliver applications to users. These applications are delivered in the form of services. These services are referred to as Software as a Service (SaaS) (Illingworth, 1991). Cloud computing is also composed of data centers that are equipped with computer hardware and software.

The combination of all these components ensures that the services are available to the users without failure or delay. The cloud refers to both the hardware and the software housed at the data center. The cloud can be either public or private. It is termed a public cloud when it is made available to the public, and users pay for it depending on their usage. The service that is paid for is referred to as utility computing. Private clouds, on the other hand, are not accessed by the public. They are only used within the data centers (Katz, 2010).

Main Body

A combination of utility computing and Software as a Service forms the whole of cloud computing. Private clouds are not considered part of cloud computing, but they play an important role in ensuring that cloud computing operates both effectively and efficiently. Virtualization, on the other hand, is a concept where physical components are represented in a form that is not tangible. This may include computer hardware, storage, and even the operating system. This has enabled the use of virtual machines that run on physical host machines; the virtual machines themselves are referred to as guest machines (Warschauer, 2011). The host machine runs its own software, which is different from that run on the guest machines. These advancements in technology have enabled easy software installation techniques and also provided an alternative way of storing information.

The use of virtualization and cloud computing has both advantages and disadvantages as far as the security of the systems is concerned (Bishop, 2003). They offer an alternative to the current ways in which software installation, storage, and maintenance of data are carried out. Some of the weaknesses and strengths in security, as far as virtualization and cloud computing are concerned, are explained below. The layering structures in cloud computing are nontransparent, which means that data that may appear superficially unrelated may be linked internally (Rich, 2009). The dependencies between data may therefore be enormous, and the result can be the loss of an entire data set in case part of it is altered or deleted. The data may also be rendered inaccessible.

The storage of data is also a problem, because the host of the cloud has sole control of where the data will be stored and how secure it will be. In case of a failure in the host's security system, the data that have been stored will be under threat. The data may also be lost or intercepted by hackers during transmission between the host machine and the guest machine (Dissanayake, 2011).
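Interception or corruption of data in transit can at least be detected with a message authentication code. A minimal sketch using Python's standard library follows; the payload and key handling are illustrative, and a real deployment would pair this integrity check with encryption such as TLS.

```python
import hashlib
import hmac
import secrets

# Shared secret, assumed to be exchanged out of band between the
# host and the guest (illustrative key management).
key = secrets.token_bytes(32)
payload = b"backup-block-0042"

# The sender attaches an HMAC-SHA256 tag to the payload.
tag = hmac.new(key, payload, hashlib.sha256).digest()

# The receiver recomputes the tag and compares in constant time.
received_ok = hmac.compare_digest(
    tag, hmac.new(key, payload, hashlib.sha256).digest())
print(received_ok)  # True

# A payload modified in transit fails verification.
tampered = b"backup-block-9999"
tampered_ok = hmac.compare_digest(
    tag, hmac.new(key, tampered, hashlib.sha256).digest())
print(tampered_ok)  # False
```

Constant-time comparison (`hmac.compare_digest`) matters here because a naive `==` comparison can leak timing information to an attacker probing tags byte by byte.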

Another weakness of virtualization and cloud computing is the lack of privacy and control, because the data are in the hands of a third party. If the data are sensitive, their privacy may be under threat since a third party is involved, and in such a case sabotage can easily take place. This is a major factor, because most people and corporate institutions value the privacy of their data, and it has resulted in relatively few people using cloud computing (Chao, 2012).

The strengths of these technologies include the ability to back up data. The data recovery technique used is also better than those on the physical systems. The use of the data centers and the fact that data are stored in the cloud enable for easy backup of the data by the third party involved in its maintenance. This means that in case of data loss, the recovery of the data will be easily achievable. Another advantage is the device independence and portability.

Since the data are held in the cloud, access to it can be achieved from any location so long as there is an internet connection (Brennan, 2010). This is helpful since it enables communication with remote offices and accessibility of their data will also be easy. The scalability of cloud computing is the property that has led to an increase in trust in the system. This is where the platform can change in size depending on the demands of the users. This ensures that the required volumes of data can be accessed when they are required without any delays if the internet connection has been established.

The use of these technologies is seen to be on the increase. It is foreseen that in the future, their use will be inevitable. Experts foresee that the size of the data centers will tremendously increase. This will be coupled with an increase in the bandwidth that they would provide. It is believed that the application software will also undergo tremendous changes. It will be designed to run both on the client-side as well as the cloud side (Rudi, 2010).

It will also be designed in such a way that the software on the client-side will still be useful even when the client has disconnected from the cloud. The scaling of the cloud will also be made high to cater to the needs of the users efficiently. The infrastructure software will also be designed to run entirely on a virtual machine, ensuring the elimination of physical machines. Measures will also be put in place to ensure that proper billing methods are devised. Experts also foresee that the hardware will be developed in such a way that it is more energy-conserving, which will necessitate the inclusion of idle states when no operation is taking place. The hardware will also be designed to ensure that the virtual machines have low-level software running as the primary software on them.

The evolution of technology will pose challenges in the software life cycle. New methods of developing software will have to be established such that this software will not run only on a physical computer. The software will be designed such that it will be able to run on the virtual machines without the need for another operating system (Grembi, 2008). The development of the software will also be done in a way that each stage is well checked to ensure that no bugs are incorporated into the system. This is for the purpose of ensuring that the data will be secure at all times.

Recommendations

Cloud computing and virtualization are important technologies. However, it has been clearly shown that they suffer some shortcomings, whose effects can be minimized. Firstly, the security of cloud computing is based on trust: there should be trust between the clients and the third party offering the service. Secondly, efforts should be made to develop standards that will govern the use of cloud computing, after extensive consultation and research. This will ensure that the requirements are well stipulated and that the services offered are up to the required standards.

References

Bishop, M. (2003). Computer Security. Boston: Pearson.

Brennan, J. E. (2010). The Future of Analytics. Boston: Song Bird Hill Media.

Chao, L. (2012). Cloud Computing for Teaching and Learning. Victoria: IGI Global.

Dissanayake, A. (2011). Your Cloud Computing Companion. Alaska: Asokaplus.

Grembi, J. (2008). Secure Software Development. Boston: Cengage Learning.

Illingworth, V. (1991). Dictionary of Computing. Oxford: Oxford University Press.

Katz, R. N. (2010). The Tower and the Cloud. London: Macmillan.

Rich, J. R. (2009). How to Do Everything iCloud. New York: McGraw-Hill.

Rudi, V. (2010). Society and Technological Change. Boston: Worth Publishers.

Warschauer, M. (2011). Learning in the Cloud. Boston: Teachers College Press.


Transmission System Development

Introduction Background

The conversion of torque into rotary motion for transmission to the differential unit has been the subject of much research by innovative engineers for many years (Parker, 1999; Lechner & Naunheimer, 1999; Zhang, Liu & Wen, 2007).

A vehicle’s transmission system is important because it converts torque into the circular motion used to propel the vehicle. However, not much research has been done on how transmission system technology has evolved, creating a gap in knowledge that the proposed study intends to cover (Suh, 1990). This study will focus on the historical development of transmission systems, beginning with Watt’s gearbox and transmission system designed with constant-mesh engagement, through to present-day automatic transmission systems (Lechner & Naunheimer, 1999; Naunheimer, Bertsche, Ryborz & Novak, 2010; Navet, Song, Simonot-Lion & Wilwert, 2005).

The category of vehicles under investigation includes land vehicles, vessels, and aircraft. However, the study will focus on the types of transmission systems used to propel motor vehicles in the automotive industry. The two types of systems in use are z-speed transmission (geared transmission with z speeds) and continuously variable transmission systems (Crolla, 2009).
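The torque-multiplying role of a z-speed geared transmission can be illustrated numerically. In the sketch below, all ratio and torque values are made-up example figures, and frictional losses (which fall under the study's scope on transmission efficiency) are ignored.

```python
# Hypothetical 5-speed gearbox: wheel torque and speed per gear.
engine_torque_nm = 150.0     # engine output torque (N*m), example value
engine_speed_rpm = 3000.0    # engine speed, example value
gear_ratios = [3.5, 2.1, 1.4, 1.0, 0.8]  # illustrative gearbox ratios
final_drive = 3.9            # illustrative differential (final drive) ratio

for i, g in enumerate(gear_ratios, start=1):
    # Torque is multiplied by the overall ratio; speed is divided by it.
    wheel_torque = engine_torque_nm * g * final_drive
    wheel_speed = engine_speed_rpm / (g * final_drive)
    print(f"gear {i}: wheel torque {wheel_torque:.0f} N*m, "
          f"wheel speed {wheel_speed:.0f} rpm")
```

The example makes the trade-off concrete: low gears trade wheel speed for torque (first gear here multiplies engine torque by 3.5 x 3.9), which is the adaptation of engine power to driving resistance that the proposed study traces historically.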

Scope

The scope of this paper covers the history of transmission systems that will include the fundamental innovations that have occurred in the development of transmission systems, how the drive units were developed, the phases of development of drive units, how vehicle transmissions were developed, and an investigation of the transmission phenomena, which includes the loss of power and the efficiency of the transmission systems in converting torque into the rotary motion for vehicle propulsion.

Statement of the problem

The development and innovation of transmission systems have not been widely discussed in the academic literature, creating a gap on the importance of innovation in mechanical engineering that this study intends to fill. To provide a solution to the problem, the study will be guided by three research objectives.

Objectives

  • To investigate the history of transmission systems
  • To conduct a study in transmission design trends
  • To investigate the design of the gearwheel transmission of vehicles

Research questions

  • What is the historical trend of the development of transmission systems?
  • What are the design trends of transmission systems?
  • How is the gearwheel transmission of the vehicles designed?

Discussion

Articles and books on transmission systems from the internet, online databases, and libraries will be used to write this paper. The proposed format of the paper includes the title page and the introduction section, which consists of the background discussion of transmission systems, the problem statement, objectives, research questions, and the significance of the study. The problem statement provides a brief description of the problem researched in this paper (Wang, Li & Peng, 2003).

  • History of the transmission systems

    • Type of innovations done on transmission systems
    • How the vehicle drive units were developed
    • The development phases of the transmission systems
    • How the gear toothed systems were developed
    • How torque converters were developed
    • The phenomena of transmission systems, efficiency and power losses during power transmission
  • Transmission design trends

    • Vehicle basic design principles
    • Arrangement of the transmission system in the vehicle
    • Transmission formats and designs of other components
  • The design principles of the gearwheel transmission.

Results

Statement of Work: the tasks and the time required to write the paper are shown in Table 2.

Resources

Personnel

I decided to write on a topic relevant to my mechanical engineering discipline that discusses the development of transmission systems, to show how innovation, one of the most important tools in modern engineering, works.

Facilities and Equipment: The following equipment will be necessary to complete the study.

  • Laptop
  • Printer
  • Printing papers
  • Access to various online libraries and the Internet.

Costs

Fiscal: The items are important because they will contribute to the success of the paper.

Table 1: Items budget.

ITEM BUDGET (IN $)
Stationery 250
Internet Access 100
Payment for database access 100
Data cleaning 50
Data entry 150
Analysis 200
Printing, photocopying and Binding 500
Miscellaneous 100
Total 1450
Table 2: Tasks and time required.

Conclusion

Summary

The proposed study is to investigate the development of transmission systems from 1884, when the constant-mesh gearbox was developed, until today, when automated systems have replaced manual systems. The proposed study will cover the development and design of the transmission systems used in the propulsion of vehicles and investigate the efficiency of transmission systems in converting torque into the rotary motion used to move vehicles.

Annotated Bibliography

Crolla, D. (Ed.). (2009). Automotive Engineering e-Mega Reference. Butterworth-Heinemann. Web.

This book provides a detailed discussion of Watt’s discovery and how he proposed to use the variable speed gearbox. I intend to use this book to demonstrate how power can be transmitted using a variable speed gearbox with dog clutch engagement and the constant mesh gearbox. The book discusses how the need to adapt the power of the steam engine to its intended use underpinned the evolutionary development of transmission systems. It also covers the importance of the dog clutch as one of the components necessary to transmit power from the engine to the differential units.

Lechner, G., & Naunheimer, H. (1999). Automotive transmissions: fundamentals, selection, design and application. Springer Science & Business Media. Web.

This book covers different transmission systems, focusing on automotive transmission systems. I plan to use it as a resource to provide a detailed discussion of the categories of vehicles and their transmission systems. The book classifies vehicles into land vehicles, vessels, and aircraft. I intend to use the book to show how z-speed transmission systems and continuously variable transmission systems work and how they have developed so far, by discussing in detail the constant mesh, synchromesh, and fully automatic transmission systems. In addition, some transmission systems operate on the principle of power interruption when shifting gears, while others operate on the principle of continuous power supply.

I also intend to use the book to show how engine speed can be adapted, through the drive units, to the power output required to propel the vehicle forward.

Other areas of this book that I will use in the study include the stages in the development of the transmission system. The book provides a detailed discussion of z-speed geared transmission systems and how they work, the fully automatic z-speed gearbox, and mechanical and hydraulic transmission systems. It shows the evolution of the transmission system from past to present models.

Naunheimer, H., Bertsche, B., Ryborz, J., & Novak, W. (2010). Automotive transmissions: Fundamentals, selection, design and application. Springer Science & Business Media. Web.

The book is a valuable resource on the development of transmission systems and of components such as the gear-toothed system. I intend to use this book to discuss the development of components such as bearings, shafts, and gear wheels. In addition, I will be able to show the historical development of the gear wheel and the materials used to make gearwheels. The book divides the history and development of transmission systems into four phases. The first phase, between 1784 and 1884, discusses how the power produced by the internal combustion engine could be captured for its intended use by means of the constant mesh gearbox.

The next phase, between 1884 and 1914, provides a detailed discussion of the speed/torque conversion principle. The third phase occurred between 1914 and 1980, when geared transmission systems were highly popular because the power of the internal combustion engine had increased significantly, making better and more efficient geared transmissions very important.

Navet, N., Song, Y., Simonot-Lion, F., & Wilwert, C. (2005). Trends in automotive communication systems. Proceedings of the IEEE, 93(6), 1204-1223. Web.

This article covers the trends in the development of transmission systems within mechanical engineering. I intend to use this resource to express my understanding of why vehicles need gearboxes and of the role gearboxes play in transmission systems. The article discusses the interrelationship between the power transmission ratio and the torque produced by the engine. It will be used as a source of knowledge on the fundamental performance features of transmission systems and on the development process of transmission systems used in automotive engineering.
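The speed/torque trade-off described above can be illustrated with a short, hypothetical Python sketch (the function names, the example gear ratio of 3.5, and the 95% efficiency figure are illustrative assumptions, not values taken from the article):

```python
def output_torque(engine_torque_nm: float, gear_ratio: float,
                  efficiency: float = 0.95) -> float:
    """Torque delivered downstream of the gearbox, in newton-metres.

    A gear ratio greater than 1 multiplies torque (and divides speed);
    the efficiency factor stands in for power losses in the gear mesh.
    """
    return engine_torque_nm * gear_ratio * efficiency


def output_speed_rpm(engine_rpm: float, gear_ratio: float) -> float:
    """Shaft speed downstream of the gearbox, in rpm."""
    return engine_rpm / gear_ratio


# Example: a first-gear ratio of 3.5 trades speed for torque.
print(output_torque(200.0, 3.5))      # 665.0 N·m out from 200 N·m in
print(output_speed_rpm(3000.0, 3.5))  # engine rpm divided by the ratio
```

The sketch shows why a low gear is used for pulling away: the same engine torque is multiplied by the ratio at the cost of proportionally lower shaft speed.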

Parker, M. (1999). The History of Manual Transmissions. Web.

This paper provides information on how block transmission systems emerged and evolved into the modern-day gearbox. I plan to use this resource to investigate how power is transmitted from the clutch into the gearbox and on to the propeller shaft. The paper provides the background theories behind the innovation and development of transmission systems for different vehicles. It also discusses reliability issues such as the service life of the transmission system, economic issues, and the ease of operation of the transmission system as it relates to road safety.

Suh, N. P. (1990). The principles of design. New York: Oxford University Press. Web.

This book provides a detailed discussion of the best principles followed in the design and development of gearboxes and other power transmission systems. I intend to use it as an important resource to demonstrate the changes that have occurred since the first gearbox was designed.

Wang, J., Li, R., & Peng, X. (2003). Survey of nonlinear vibration of gear transmission systems. Applied Mechanics Reviews, 56(3), 309-329. Web.

This article is very important because it presents the best methods adopted for use today in the design and development of transmission systems, with specialised emphasis on the gearbox. I intend to use this resource to show the changes that have occurred and how they have been implemented in the design and development of gearboxes for transmitting power from the clutch to the differential unit.

Zhang, Y., Liu, Q., & Wen, B. (2007). Reliability-based sensitivity design of gearpairs. 12th IFToMM World Congress, Besançon (France). Web.

Typically, this article provides a discussion of the sensitivity of the gears and the factors to consider when designing them. I will use this article to show how innovation and development in mechanical engineering have led to new gearboxes that are lighter, faster, and more efficient in transmitting power from the engine to the propeller shaft.


Data Processing: Systems, Applications and Products

Introduction

The Systems, Applications, and Products (SAP) for data processing are software made by a German software vendor to prepare suitable business models (Gillot, 2008, p.279). “The SAP Company, founded in 1972, is headquartered in Walldorf, Germany, and it is the largest global business software company” (SAP, 2011).

With more than 53,000 employees in various departments across more than 50 countries, the company positions itself in key global markets through a network of SAP Labs in Bulgaria, Canada, China, Germany, Hungary, India, Israel, and the United States, giving SAP a local presence while remaining globally organized (SAP, 2011). Moreover, the company seeks to empower companies of all sizes and industries to run better by running on a platform of service-oriented architecture (SOA), SAP NetWeaver technology, and in-memory computing, all of which enable it to offer companies great flexibility and responsiveness.

SAP has various application areas, including workflow, sales and distribution, production planning, human resource management, financial management, assets and material management, plant maintenance, quality management, industry solutions, controlling, and project implementation. The company’s major business is selling licenses for software solutions and related services, including consulting, maintenance, and training services for those solutions (SAP, 2011). The software solutions cover standard business processes and technologies, but there are also specific industry-based solutions crafted to help companies make their business processes more effective and efficient and to allow for advancements.

The Product and Services features and benefits derived from enterprise and SME

Products

The SAP product portfolio contains the following key software applications: SAP Business Suite, SAP Business All-in-One solutions, the SAP BusinessObjects portfolio, SAP solutions for sustainability, and the SAP NetWeaver technology platform. Generally, the SAP Business Suite empowers businesses to create, customize, carry out, and control business activities and processes to suit new strategies and IT processes.

SAP’s Business Suite consists of five enterprise applications: “SAP Enterprise Resource Planning (ERP), SAP Customer Relationship Management (CRM), SAP Product Lifecycle Management (PLM), SAP Supply Chain Management (SCM) and SAP Supplier Relationship Management (SRM)” (SAP, 2010). The SAP Business Suite applications ensure rapid improvement of transparency across all business units and the information silos within them, thus increasing the capacity to make accurate business decisions and reducing inconsistencies. They provide a platform through which one can effectively organize the various aspects of a company, such as costs, raw inputs, labour, processes, and marketing.

The SAP Business All-in-One solution mostly handles the needs of small businesses and medium-sized companies looking for a fully configurable solution to aid in running the business (SAP, 2011). On the other hand, the SAP BusinessObjects portfolio handles the needs of small to large companies. Large enterprises benefit from the following solutions within this application: first, the business intelligence solutions enable users to make applicable, informed decisions based on data and analysis (SAP, 2011).

Secondly, the enterprise performance management solution improves the company’s image and competitiveness through positioning, transparency, and improved confidence. Thirdly, the governance, risk, and compliance solutions maintain a steady balance of risk and opportunity within the business. Lastly, analytic applications enable business users to make better decisions and increase competitiveness. SMEs gain from interactive, open business intelligence and performance management solutions for midsize companies and from free software trials (SAP, 2011).

The SAP solution for sustainability enables enterprises to carefully manage their sustainability strategies and improve business performance (SAP, 2011). The SAP NetWeaver technology platform enables enterprises to integrate information, business processes, and technologies by laying the foundation of the SAP applications.

Services

The SAP Company offers consulting, maintenance, and training services for its software solutions through the following organizations: first, SAP Consulting, whose global team with diverse expertise can help enterprises successfully implement business processes and technological changes. Secondly, SAP Active Global Support (SAP AGS) enables enterprises to effectively manage SAP and non-SAP applications, improving integration and reducing losses.

Thirdly, SAP Education provides enterprises with high-quality on-site or online courses offered alongside other SAP products and training services. Thus, enterprises reap the benefits of being updated and more knowledgeable in their fields. Lastly, SAP Custom Development enables enterprises to make a wide range of customizations to their processes and applications, thus improving flexibility in their businesses.

Conclusion

Through its three-tier architecture and the vast capability for componentization and customization that SAP provides, every company using SAP stands to gain greatly in its processes and in future advances and updates. SAP, though expensive to acquire and maintain, provides companies with ultimate capabilities to handle all their processes and solve arising problems, using the best system for integrating technology systems and IT strategies (Anderson, Nilson & Rhodes, 2009, p.36).

References List

Anderson, G., Nilson, C. & Rhodes, T., 2009. SAP Implementation Unleashed: A Business and Technical Roadmap to Deploying SAP. NY: Sams Publishing.

Gillot, J., 2008. The Complete Guide to Business Process Management. NY: Lulu.com. Web.

SAP. 2010. SAP Website. Web.
