Wired and Wireless Networks

Wired and Wireless: What’s the Difference?

The main difference between a wired and a wireless data communication infrastructure is the presence of physical cabling: a wired network communicates over cables, whereas a wireless network uses radio waves. Beyond the medium, both types employ the same or similar techniques for the core elements of essential network services.

Each technology has an edge over the other in different respects. Wired networks are easy to set up and troubleshoot, whereas wireless networks are comparatively difficult to set up, maintain, and troubleshoot. Wired networks keep you tethered in place, while wireless ones give you the convenience of movement. Wired networks prove expensive when covering a large area because of the wiring and cabling involved, a cost wireless networks avoid. On the other hand, wired networks offer better transmission speeds than wireless ones.

In a wired network, a user does not have to share the medium with other users and thus gets dedicated bandwidth, while in a wireless network the same connection may be shared by multiple users. One of the most common questions we have to answer on a daily basis is the difference between wired and wireless networks. Wired networking is communication between two devices via cables; wireless networking is communication between two devices without them. Is it really that simple? Each method of networking has its own pros and cons. Wireless networks do not use any form of cable.

Data is transmitted over radio waves, just as with cordless phones or a Bluetooth headset. The major advantage of a wireless device is the mobility and freedom that comes with it: there is less clutter and fewer wires to worry about. In most cases, however, you sacrifice speed and security. Wired networks, on the other hand, have been around for some time. Known today as Ethernet, such networks usually connect devices using CAT5 cables.

Speed and security are greatly enhanced in this scenario. The latest Ethernet routers can support up to 1000 Mb/s (a gigabit per second), ten times faster than the widely used 100 Mb/s routers. The overall cost of a wired network is also lower, while it provides higher performance and better security than a wireless network. For home users, though, wireless networks have become the popular choice. A wireless network saves you the time and effort of installing a lot of cables, and if you need to relocate a client machine in your office, you only need to move the computer.

Wireless networking is very useful in public places such as libraries, hotels, schools, airports, and train stations. A drawback of wireless Internet is that quality of service is not guaranteed: if there is any interference, the connection may be dropped. Wireless local area networks allow users in a local area, such as a university or a library, to join a network and gain wireless access to the Internet. A temporary network can also be formed by a small number of users without the need for access points.

A Service Set Identifier (SSID) acts as a simple password by allowing a WLAN to be split into separate networks, each with a unique identifier. These identifiers are configured in the access points. To join one of the networks, a computer must be configured with the corresponding identifier for that network; if the identifiers match, access is granted. This can be combined with MAC address filtering, a good security method, but one used mainly in small wireless networks, because manually entering each MAC address into the access point involves considerable work.
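As a rough sketch of the access-point behaviour described above (identifier matching plus a manually maintained MAC allow-list), with all names hypothetical; real access points implement this in firmware:

```python
# Toy model of an access point that grants association only when the
# client's network identifier (SSID) matches and its MAC address is on
# the manually entered allow-list. Illustrative only.

class AccessPoint:
    def __init__(self, ssid, allowed_macs):
        self.ssid = ssid
        self.allowed_macs = {mac.lower() for mac in allowed_macs}

    def associate(self, client_ssid, client_mac):
        if client_ssid != self.ssid:
            return False          # wrong network identifier
        if client_mac.lower() not in self.allowed_macs:
            return False          # MAC was never entered into the AP
        return True

ap = AccessPoint("LibraryWLAN", ["00:1A:2B:3C:4D:5E"])
print(ap.associate("LibraryWLAN", "00:1a:2b:3c:4d:5e"))  # known client
print(ap.associate("GuestWLAN", "00:1a:2b:3c:4d:5e"))    # wrong SSID
```

The manual-work drawback is visible even in the sketch: every legitimate client's MAC must be typed into `allowed_macs`, which scales poorly beyond a small network.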

Wireless networking is very popular in home networking: more than 20 percent of homes with broadband Internet use wireless networks, and this number is increasing. By one general estimate, worldwide hotspots now number more than 30,000 and will grow to about 210,000 in the next few years. Most large hotels already offer Wi-Fi, and business travelers are willing to pay for wireless access. 802.11n is the next Wi-Fi speed standard; it is set to offer bandwidth around 108 Mbps and is still under development. With a speed of 70 Mbps and a range of up to 30 miles, the 802.16 standard, known as WiMAX, is sure to give a boost to wireless networking.

The term wireless networking refers to technology that enables two or more computers to communicate using standard network protocols but without network cabling; any technology that does this could be called wireless networking. Fueled by the emergence of cross-vendor industry standards such as IEEE 802.11, it has produced a number of affordable wireless solutions that are growing in popularity with businesses and schools, as well as in sophisticated applications where network wiring is impossible, such as warehousing or point-of-sale handheld equipment.

An ad-hoc, or peer-to-peer, wireless network consists of a number of computers, each equipped with a wireless networking interface card. Each computer can communicate directly with all of the other wireless-enabled computers. They can share files and printers this way, but may not be able to access wired LAN resources unless one of the computers acts as a bridge to the wired LAN using special software. Alternatively, a wireless network can use an access point, or base station.

In this type of network the access point acts like a hub, providing connectivity for the wireless computers. It can connect (or “bridge”) the wireless LAN to a wired LAN, allowing the wireless computers access to LAN resources such as file servers or existing Internet connectivity. That is the difference between wired and wireless networks.


LAN and Network Management

Imagine yourself as a network administrator responsible for a 2,000-user network. This network reaches from California to New York, with some branches overseas. In this situation anything can, and usually does, go wrong, and it would be your job as system administrator to resolve each problem as quickly as possible when it arises. The last thing you would want is for your boss to call you up asking why you haven't done anything to fix the two major systems that have been down for several hours.

How do you explain to him that you didn't even know about it? Would you even want to tell him that? Now picture yourself in the same situation, only this time you are using a network monitoring program. You sit in front of a large screen displaying a map of the world, leaning back gently in your chair. A warning tone sounds, and looking at your display, you see that California is now glowing a soft red instead of the green glow of moments before. You select the state of California, and the display zooms in for a closer look.

You see a network diagram overview of all the computers your company has within California. Two systems are flashing, with an X on top of them indicating that they are experiencing problems. Tagging the two systems, you press Enter, and with a flash the screen displays all the statistics of the two systems, including anything they might have in common that could be causing the problem. Seeing that both systems are linked to the same card of a network switch, you pick up the phone and give that branch office a call, notifying them not only that they have a problem, but how to fix it as well.

Early in the days of computers, a central computer (called a mainframe) was connected to a number of dumb terminals using standard copper wire. Not much thought was put into how this was done, because there was only one way to do it: they were either connected, or they weren't. Figure 1 shows a diagram of these early systems. If something went wrong with this type of system, it was fairly easy to troubleshoot; the blame almost always fell on the mainframe.

Shortly after the introduction of the personal computer (PC) came local area networks (LANs), forever changing the way we look at networked systems. LANs originally consisted of just PCs connected into groups, but soon there came a need to connect those individual LANs together, forming what is known as a wide area network, or WAN. The result was a complex web of computers joined together using various types of interfaces and protocols. Figure 2 shows a modern-day WAN.

Last year, a survey of Fortune 500 companies showed that 15% of their total computer budget, $1.6 million, was spent on network management (Rose, 115). Because of this, much attention has focused on two families of network management protocols: the Simple Network Management Protocol (SNMP), which comes from a de facto standards background of TCP/IP communication, and the Common Management Information Protocol (CMIP), which derives from a de jure standards background associated with the Open Systems Interconnection (OSI) model (Fisher, 183).

In this report I will cover the advantages and disadvantages of both the Common Management Information Protocol (CMIP) and the Simple Network Management Protocol (SNMP), as well as discuss a new protocol for the future. I will also give some good reasons supporting why I believe that SNMP is the protocol all networks should use. SNMP is a protocol that enables a management station to configure, monitor, and receive trap (alarm) messages from network devices (Feit, 12). It is formally specified in a series of related Request for Comments (RFC) documents.

The first protocol developed was the Simple Network Management Protocol (SNMP). It was commonly considered a quickly designed “band-aid” solution to internetwork management difficulties while other, larger and better protocols were being designed (Miller, 46). However, no better choice became available, and SNMP soon became the network management protocol of choice. It works very simply, as the name suggests: it exchanges network information through messages known as protocol data units (PDUs). A PDU contains variables that have both titles and values.

There are five types of PDUs that SNMP uses to monitor a network: two deal with reading terminal data, two with setting terminal data, and one, called the trap, is used for monitoring network events such as terminal start-ups. By far the largest advantage of SNMP over CMIP is its simple design: it is as easy to use on a small network as on a large one, with easy setup and little stress on system resources. The simple design also makes it easy for users to program the system variables they would like to monitor.
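The division of labour among the five PDU types can be sketched with a toy agent. This is an illustration with made-up names, not real SNMP: an actual implementation encodes PDUs in ASN.1/BER and exchanges them over UDP ports 161 and 162.

```python
# Toy sketch of the five SNMPv1 PDU types: two for reading (GetRequest,
# GetNextRequest), one for writing (SetRequest), the agent's reply
# (GetResponse), and the unsolicited Trap.

GET, GETNEXT, SET, RESPONSE, TRAP = (
    "GetRequest", "GetNextRequest", "SetRequest", "GetResponse", "Trap")

class Agent:
    def __init__(self, mib):
        self.mib = dict(mib)   # OID -> value: the agent's view of its MIB

    def handle(self, pdu_type, oid, value=None):
        if pdu_type == GET:
            return (RESPONSE, oid, self.mib.get(oid))
        if pdu_type == GETNEXT:      # walk to the next OID in order
            later = sorted(k for k in self.mib if k > oid)
            nxt = later[0] if later else None
            return (RESPONSE, nxt, self.mib.get(nxt))
        if pdu_type == SET:
            self.mib[oid] = value
            return (RESPONSE, oid, value)
        raise ValueError("unsupported PDU type")

agent = Agent({"1.3.6.1.2.1.1.5.0": "router-ny"})   # sysName-style OID
print(agent.handle(GET, "1.3.6.1.2.1.1.5.0"))
# The trap travels the other way: the agent emits it unsolicited,
# for example at start-up.
startup_trap = (TRAP, "coldStart", None)
```

Note that the manager only ever polls or sets; the trap is the one message the agent originates on its own, which is what makes event monitoring possible.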

Another major advantage of SNMP is that it is in wide use today around the world. Because it was developed at a time when no other protocol of this type existed, it became very popular, and it is a built-in protocol supported by most major vendors of networking hardware, such as hubs, bridges, and routers, as well as by major operating systems. It has even been put to use inside the Coca-Cola machines at Stanford University in Palo Alto, California (Borsook, 48). Because of SNMP's small size, it has even been implemented in such devices as toasters, compact disc players, and battery-operated barking dogs.

At the 1990 Interop show, John Romkey, vice president of engineering at Epilogue, demonstrated that through an SNMP program running on a PC you could control a standard toaster over a network (Miller, 57). SNMP is by no means a perfect network manager, but because of its simple design these flaws can be fixed. The first problem realized by most companies is that SNMP has some rather large security problems. Any decent hacker can easily access SNMP information, gaining information about the network and potentially the ability to shut down systems on it.

The latest version of SNMP, called SNMPv2, has added security measures that were left out of SNMP to combat its three largest problems: privacy of data (to prevent intruders from gaining access to information carried along the network), authentication (to prevent intruders from sending false data across the network), and access control (which restricts access to particular variables for certain users, removing the possibility of a user accidentally crashing the network) (Stallings, 213). The largest problem with SNMP, ironically enough, is the same thing that made it great: its simple design.

Because it is so simple, the information it deals with is neither detailed nor well organized enough to deal with the growing networks of the 1990s. This is mainly due to the quick creation of SNMP; it was never designed to be the network management protocol of the 1990s. Like the previous flaw, this one too has been corrected in the new version, SNMPv2. The new version allows more detailed specification of variables, including the use of the table data structure for easier data retrieval. Also added are two new PDUs used to manipulate the tabled objects.

In fact, so many new features have been added that the formal specifications for SNMP have expanded from 36 pages (with v1) to 416 pages with SNMPv2 (Stallings, 153). Some might say that SNMPv2 has lost its simplicity, but the truth is that the changes were necessary and could not have been avoided. A management station relies on the agent at a device to retrieve or update the information at that device. The information is viewed as a logical database called a Management Information Base, or MIB. MIB modules describe MIB variables for a large variety of device types, computer hardware, and software components.

The original MIB for managing a TCP/IP internet (now called MIB-I) was defined in RFC 1066 in August 1988 and updated in RFC 1156 in May 1990. The MIB-II version, published in RFC 1213 in March 1991, contained some improvements and has proved that it can do a good job of meeting basic TCP/IP management needs. MIB-II added many useful variables missing from MIB-I (Feit, 85). MIB variables are used not only by SNMP but by CMIP as well. In the late 1980s a project began, funded by governments and large corporations.

The Common Management Information Protocol (CMIP) was born. Many thought that because of its nearly unlimited development budget it would quickly come into widespread use and overthrow SNMP from its throne. Unfortunately, problems with its implementation have delayed its use, and it is now only available in limited form from the developers themselves (SNMP, Part 2 of 2, III.40). CMIP was designed to be better than SNMP in every way, repairing all its flaws and expanding on what was good about it, making it a bigger and more detailed network manager.

Its design is similar to SNMP's, in that PDUs are used as variables to monitor the network; CMIP, however, contains 11 types of PDUs (compared to SNMP's 5). In CMIP, the variables are seen as complex and sophisticated data structures with three attributes: 1) variable attributes, which represent the variable's characteristics (such as its data type); 2) variable behaviors, the actions of that variable that can be triggered; and 3) notifications, whereby the variable generates an event report whenever a specified event occurs (e.g., a terminal shutdown would cause a variable notification).

As a comparison, SNMP only employs variable properties one and three above. The biggest feature of the CMIP protocol is that its variables not only relay information to and from the terminal (as in SNMP), but can also be used to perform tasks that would be impossible under SNMP. For instance, if a terminal on a network cannot reach the file server a predetermined number of times, CMIP can notify the appropriate personnel of the event.

With SNMP, however, a user would have to specifically tell it to keep track of unsuccessful attempts to reach the server, and then what to do when that variable reaches a limit. CMIP therefore results in a more efficient management system, and less work is required from the user to stay updated on the status of the network. CMIP also contains the security measures left out of SNMP. Because of the large development budget, when it becomes available CMIP will be widely used by the government and the corporations that funded it.
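The "manual" SNMP-style approach just described, where the management station itself counts failed attempts and reacts once a preset threshold is crossed, can be sketched as follows (all names hypothetical; this is the logic a user would have to program, not part of SNMP itself):

```python
# Sketch of a manager-side monitor: count consecutive failed attempts
# to reach the file server and raise an alert at a threshold.

class ReachabilityMonitor:
    def __init__(self, threshold):
        self.threshold = threshold
        self.failures = 0
        self.alerts = []

    def record_attempt(self, reached_server):
        if reached_server:
            self.failures = 0            # reset on any success
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.alerts.append(
                    f"server unreachable {self.failures} times")

mon = ReachabilityMonitor(threshold=3)
for ok in [False, False, False]:
    mon.record_attempt(ok)
print(mon.alerts)   # one alert, raised on the third consecutive failure
```

Under CMIP the equivalent behavior would live in the variable's own "behaviors" and "notifications" attributes, which is why the essay calls CMIP more efficient for the operator.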

After reading the above, you might wonder why, if CMIP is so wonderful, it is not being used already (after all, it has been in development for nearly 10 years). The answer lies in what is possibly CMIP's only major disadvantage, one that in my opinion is enough to render it useless: CMIP requires about ten times the system resources needed for SNMP. In other words, very few systems in the world would be able to handle a full implementation of CMIP without undergoing massive network modifications. This disadvantage has no inexpensive fix, and for that reason many believe CMIP is doomed to fail.

The other flaw in CMIP is that it is very difficult to program: its complex nature requires so many different variables that only a few skilled programmers are able to use it to its full potential. Considering the above, one can see that both management systems have their advantages and disadvantages. The deciding factor between the two lies with their implementation: for now, it is almost impossible to find a system with the necessary resources to support the CMIP model, even though it is superior to SNMP (v1 and v2) in both design and operation.

Many people believe that the growing power of modern systems will soon be a good fit for the CMIP model and might result in its widespread use, but I believe that by the time that day comes, SNMP could very well have adapted itself to offer what CMIP currently offers, and more. As we've seen with other products, once a technology achieves critical mass and a substantial installed base, it's quite difficult to convince users to rip it out and start fresh with a new and unproven technology (Borsook, 48). It is therefore recommended that SNMP be used in situations where minimal security is needed, and SNMPv2 where stronger security is required.


Virtual Private Network

Faith, my best friend, has been trying to get an online writing job. She found some good websites; the only problem was her location, as the services could not be offered in her country, Kenya. She told me about it, and since I had just learned about VPNs, I advised her to use one.

So what’s a VPN?

VPN stands for Virtual Private Network. It gives you online privacy and anonymity by creating a private network over a public Internet connection. VPNs mask your Internet Protocol (IP) address, so your online actions are virtually untraceable. Most important, VPN services establish secure, encrypted connections.

How does a VPN protect your privacy?

VPNs essentially create a data tunnel between your local network and an exit node in another location, which could be thousands of miles away, making it seem as if you're in another place. This benefit allows "online freedom," the ability to access your favorite apps and websites from anywhere in the world.

VPN providers

There are many choices when it comes to VPN providers. Some offer a free service, while others charge for it.

Paid VPN providers offer robust gateways, proven security, free software, and unmatched speed.

VPN protocols

The number of protocols and available security features has grown with time, but the most common protocols are:

PPTP - PPTP tunnels a point-to-point connection over the GRE protocol. It can be set up on every major OS, but it is not the most secure.

L2TP/IPsec - More secure than PPTP, with more features. L2TP/IPsec implements two protocols together to gain the best features of each: L2TP creates the tunnel, and IPsec provides the secure channel.

This makes an impressively secure package.

OpenVPN - OpenVPN is an SSL-based VPN that is gaining popularity. SSL is a mature encryption protocol, OpenVPN can run on a single UDP or TCP port, and the software is open source and freely available.

That's all for today; for more inquiries on VPNs, join my email list for more info.
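The tunnelling idea behind all of these protocols can be illustrated with a toy sketch: the payload is encrypted and wrapped in an outer packet addressed to the exit node, so an observer on the path sees only the exit node's address. The XOR keystream here is a stand-in for real encryption such as IPsec or TLS; it is for illustration only, never for actual use:

```python
# Toy VPN-style encapsulation: encrypt the inner payload, wrap it in an
# outer packet addressed to the exit node. Toy cipher -- NOT real crypto.

import hashlib
import itertools

def keystream(secret):
    # Derive an endless byte stream from the shared secret.
    counter = itertools.count()
    while True:
        block = hashlib.sha256(
            secret + next(counter).to_bytes(8, "big")).digest()
        yield from block

def xor_bytes(data, secret):
    # XOR is symmetric: the same call encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, keystream(secret)))

def encapsulate(inner_payload, exit_node, secret):
    # Outer header stays in the clear; inner payload is encrypted.
    return {"dst": exit_node, "body": xor_bytes(inner_payload, secret)}

secret = b"shared-tunnel-key"
packet = encapsulate(b"GET /jobs HTTP/1.1", "vpn.example.net", secret)
# The exit node decrypts and forwards the request on your behalf.
assert xor_bytes(packet["body"], secret) == b"GET /jobs HTTP/1.1"
```

This is why the destination website sees the exit node's location rather than yours, which is exactly the property that would let Faith reach the writing sites blocked in her region.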


Convolutional Neural Network


Today, biometric recognition systems are gaining acceptance and popularity due to their wide range of applications. They are considered more secure than traditional password-based methods. Research is being done to improve biometric security against risks and challenges from the surroundings. Artificial Intelligence has played a significant role in biometric security. The Convolutional Neural Network (CNN), a member of the AI family designed to work a little like the human brain (though not exactly), handles the complexity and variation in facial images very effectively. This paper focuses on Artificial Intelligence, Machine Learning, and Deep Learning, and on how a CNN carries out facial detection.

Introduction

The increasing demand for technology in every field of our lives has raised the risk to data security in parallel. From ancient times, man has put his best effort into keeping his things secure, but today, in this digital world, we face more problems due to impostors and other types of security hacks. Besides this, curious human nature has always tried to do something new and cross predefined boundaries. Intelligence is an innate human quality, but nowadays technology has made machines think and behave like us to some extent.

This man-made intelligence, created through rigorous use of complex mathematical operations and search algorithms, is known as Artificial Intelligence (AI). When we saw the AI depicted in the Hollywood movie Terminator, we could not even imagine the concept of such a smart machine, one that could handle different situations.

But now, what seemed impossible is becoming possible thanks to AI, as it has opened the door to a completely new world of opportunities. Artificial intelligence is a branch of computer science that aims to make a computer, robot, or piece of software think intelligently, in the same manner that intelligent humans think, and it has proved very useful where traditional algorithmic solutions do not work well.

We use AI-based applications everywhere in our day-to-day lives, such as the spam filter in a Gmail account, plagiarism checkers, Google's intelligent predictions in web search, and suggestions on Facebook and YouTube. The main purposes of designing AI systems include the following areas:

- Planning
- Learning
- Problem solving
- Pattern recognition
- Speech/facial recognition
- Natural language processing
- Creativity, and many more

Neural networks and deep learning, branches of AI, currently provide the best methods for solving many problems associated with biometric authentication. Biometrics is a notable technique for personal authentication based on either physical attributes (fingerprint, iris, face, palm, hand, DNA, etc.) or behavioral ones (speech, signature, keystroke, etc.).

As we all know, our face is one of the wonderful creations of God, and the unique diversity among all faces helps us differentiate one another. Facial recognition is the fastest-growing field, because a large number of applications are adopting it. Recently, on 12 September 2017, Apple launched the iPhone X with its built-in face recognition system, which is claimed to identify the owner's face in the dark, or even when the owner has a different hairstyle or look.

Apple says that the facial recognition cannot be spoofed using a photograph or even a mask.

Application areas of facial recognition

Facial biometric recognition is becoming popular due to its wide range of applications, and it can easily be deployed and integrated anywhere there is a modern high-definition camera. Some of the trending applications:

- Many electronic devices are integrated with face biometrics to eliminate the need for passwords, providing an enhanced security and access method.

- Facebook's automatic facial detection feature recognizes our friends' faces with pretty good accuracy and makes tag suggestions based on it.
- Criminal identification has become simpler through better recognition of facial images from CCTV surveillance. It may reduce traffic-rule violations and road accidents.
- Some universities use a facial recognition system to monitor student attendance, so that management cannot be fooled by students signing in on behalf of others.

- ESG Management School in Paris uses facial recognition software in its online classes to make sure students aren't slacking off. Using software called Nestor, the webcam on a student's computer analyzes eye movements and facial expressions to determine whether he or she is paying attention during video lectures.

In this paper, we will focus on the need for facial recognition and on how deep learning and neural networks have become the backbone of this technology.

Machine Learning (ML) and Deep Learning (DL)

Machine learning is considered a subset of AI that uses statistical techniques and algorithms to make a machine capable of making decisions or predictions by learning from given data and adapting through experience.

The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in the data and make better decisions in the future based on the examples we provide. The primary aim is to allow computers to learn automatically, without human intervention or assistance, and to adjust their actions accordingly.

Deep learning is a subset of machine learning in which a machine achieves a higher level of recognition accuracy; it aims to solve real-world problems such as image recognition, sound recognition, space exploration, weather forecasting, and many other automated applications. Here, the word "deep" refers to the number of layers the network uses to accomplish a task. Deep learning methods use neural network architectures, much like the neurons in the human brain, introducing the concept of the Artificial Neural Network (ANN).

Concept of the Artificial Neural Network in problem solving

Today, automated systems have made our lives easy and have replaced people in some places. But when we talk about "intelligence," man will always be superior to machines because of our God-given nervous system, which is composed of billions of neurons.

These neurons are interconnected and pass signals to one another, enabling the entire system to identify, classify, and analyze things. Taking inspiration from the biological neural network, the concept of the ANN came into existence. The inventor of the first neurocomputer, Dr. Robert Hecht-Nielsen, defines a neural network as "a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs."

On the basis of data flow, ANNs come in two types. Feedforward Neural Networks: this type of network does not have feedback loops; the output of one layer simply becomes the input to the next. Practically, in a feedforward network, a prediction is not affected by previous predictions. Recurrent Neural Networks (RNN): this type of neural network allows feedback loops by transmitting signals not only in the forward direction; data also flows backward, which is why it is sometimes known as a feedback ANN.

In an RNN, each neuron is connected with the others, and how the flow of data is maintained is governed by its internal memory. The decision an RNN makes is affected by the decision the network made previously; that is, the current output of an RNN depends on both the previous output and the current input.

On the basis of layering, there are two types of ANN. Single-layer network: in this type of network, the neurons in the input layer are connected directly to the neurons in the output layer, with no layer in between. Multi-layer network: this type of ANN has one or more layers, called hidden layers, between the input and output layers.

These hidden layers carry out computation by passing data from one layer to another: the output from one layer becomes the input for the next, and so on, until the final output is obtained from the output layer.

Convolutional Neural Network (CNN)

A convolutional neural network (CNN) is a deep learning architecture belonging to the category of multilayer, feedforward artificial neural networks. One of the most promising areas where this technology is rapidly growing is security.

It has been very helpful in monitoring suspicious banking transactions, as well as in video surveillance (CCTV) systems.

Figure 4: A typical CNN architecture.

Besides the input and output layers, a CNN has many hidden layers in between, which may be classified as follows. Convolutional layer: this layer performs the core operations of training and forms the basis of the CNN.

Each layer has a single set of weights shared by all its neurons, and each neuron is responsible for processing a small part of the input space. Thus, the convolutional layer is just an image convolution of the previous layer, where the weights specify the convolution filter. Pooling layer: this layer, also known as the downsampling layer, is placed after the convolutional layer. The pooling layer is responsible for reducing the spatial size (width x height) of the input volume that will be passed to the next convolutional layer.
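A minimal NumPy sketch of these two layer types: one fixed convolution filter (valid padding, stride 1) followed by 2x2 max pooling. In a real CNN the filter weights are learned during training; here the filter is fixed purely for illustration.

```python
# Sketch of a convolutional layer (one filter) and a max-pooling layer.

import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image; each output value is the sum of
    # the elementwise products over the window (valid padding, stride 1).
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    # Keep only the maximum of each size x size window, shrinking
    # the spatial dimensions (width x height) of the feature map.
    oh, ow = fmap.shape[0] // size, fmap.shape[1] // size
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = fmap[i*size:(i+1)*size, j*size:(j+1)*size].max()
    return out

image = np.arange(36, dtype=float).reshape(6, 6)      # toy 6x6 "image"
edge_filter = np.array([[1.0, -1.0], [1.0, -1.0]])    # crude vertical-edge detector
fmap = conv2d(image, edge_filter)   # (6, 6) -> (5, 5)
pooled = max_pool(fmap)             # (5, 5) -> (2, 2)
print(fmap.shape, pooled.shape)
```

Stacking such conv-and-pool pairs is what gives a CNN its "deep" structure: each pooling step shrinks the input volume before it reaches the next convolutional layer, exactly as described above.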

Fully connected layer: this layer connects each neuron in the previous layer to every neuron in the next layer.

Facial detection and recognition using a CNN

A human brain sees many images in a day and is able to distinguish each one accurately, without our realizing how the processing is done.

But it is a different case with machines, because they have to recognize an image on the basis of learning. Facial detection is a method of identifying a person or object from their unique features, and this process involves detecting and extracting the face from the original image or video. After this, face recognition takes place, where complex computer algorithms are used to recognize the face.

Here, we will walk through the entire process of face detection and recognition. A face recognition system involves two phases: an enrollment phase and a detection phase. Image capture: in the enrollment phase, several pictures of the person the system should recognize as "known" are captured, with different facial expressions and head positions.

Feature extraction: in this step, different feature measures that can better describe a human face are applied. Different algorithms, such as Principal Component Analysis (PCA), Haar features, and Local Binary Patterns (LBP), are available for facial measurement. The CNN is trained on these measurements for later use. Storing in the database: all the extracted features are stored in a database so that they can be used later in the identification process.

Face detection: when an image is submitted for identification, it is checked against the captured and stored images in the database using face detection algorithms. Pre-processing: pre-processing is necessary to make the training phase easier and smoother.

The collected face images or video frames are passed through a pre-processing phase to eliminate noise, blur, shadows, poor lighting, and other unwanted factors. The resulting smooth image is passed to the next phase. Feature extraction: after pre-processing, feature extraction is carried out by the CNN that was trained during the enrollment phase.

Recognition- This is the last step, where a suitable classifier such as nearest neighbor, a Bayesian classifier, or a Euclidean distance classifier is chosen. The classifier compares the feature vectors stored in the database with the query feature vector, and the best-matched face image is returned as the recognition output.
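The recognition step can be sketched with a nearest-neighbour classifier using Euclidean distance. The names and three-element feature vectors below are made-up placeholders for the stored enrollment features:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognise(query, database):
    """Return the enrolled name whose stored feature vector is closest to the query."""
    return min(database, key=lambda name: euclidean(query, database[name]))

# Hypothetical enrollment database: name -> feature vector.
database = {
    "alice": [0.1, 0.9, 0.3],
    "bob":   [0.8, 0.2, 0.5],
}
print(recognise([0.15, 0.85, 0.35], database))
```

In practice the database would hold several vectors per person and the system would also apply a distance threshold, rejecting queries whose best match is still too far away ("unknown face") rather than always returning a name.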

Conclusion

Biometric verification/authentication is going to be deployed everywhere, from government to private organizations, in the coming days. In this paper, we studied the relationships among AI, ML, DL, ANNs and CNNs. We also demonstrated how a CNN carries out facial detection with improved accuracy. The field of AI has a wide spectrum and is open for researchers, and it aims to provide better results in biometric security in the future.



Dynamic Integration of Supply-Chain Network

The integration of suppliers into FedEx's structure adds value for FedEx's customers. As price competition ceases to be a global force, the supplier's role will be to add value, not just to reduce costs. Customers and suppliers will work together and form inter-organizational teams, which will facilitate improved communication and increase the rate of learning. Benefits will be gained from sharing mutual experience and knowledge, resulting in the whole chain becoming better aligned with the final customer's requirements and objectives.

FedEx's Virtual Structure

The workforce of many organizations will be decentralised and even home-based via interactive networks. Why pay for an office that is available for 160 hours per week when it is being used for only 20-40 hours per week? The merging of information technology and telecommunications has brought a revolution in our society, which will continue and allow for communication from any number of locations. The virtual organization will see managers, computer programmers, journalists, consultants, designers and many others all communicating from their own homes and from their customers' premises.

Such areas as technical, research, marketing and information technology functions could be relocated offsite. We shall see the emergence of virtual teams using teleconferencing through laptop computers and other devices. The successful organization's focus will shift from a control-based to a trust-based system built on dedicated, trustworthy and loyal employees. Technological innovation Ground-breaking technology will transform many organizational functions. Organizations like FedEx will need to be dynamic and flexible, to cherish impermanence and to thrive on chaos.

Technology will dramatically change the way we communicate, work and socialise. Technological innovation will also improve work processes and accommodate horizontal workflows by providing cross-functional information flows and performance feedback. People come first Instead of producing moribund personnel by forcing individuals to comply with tightly defined corporate norms, companies must find ways to encourage creativity and to nurture and utilise each employee's unique knowledge and capabilities.

World-class organizations will increasingly treat their employees as their most important assets. Despite the pervasive influence of technological innovation, the most successful enterprises will be the ones with the quickest reactions, the most innovative management and the best people. Team-based organizational structure Participative management through teams will increasingly replace today's hierarchical structures. High-performance teams will manage their direct environment and be instrumental in setting relevant organizational goals. Clearly communicated vision and objectives

FedEx will need to be tightly focused and highly specialised. The emphasis will be on distinguishing core capabilities and supporting core processes, while all other activities will be outsourced. FedEx will need a strong purpose and vision, and must stay focused on its core values, in order to make work meaningful and to attract, motivate and retain outstanding people. Its purpose will be more than just increasing profit or market share; it will reflect an ongoing commitment to adding value for employees, customers and the wider community.

Authentic leadership will relate to initiating and maintaining momentum in process improvement, and will increasingly concentrate on formulating and implementing company-wide strategies. This strategic intent will include attempts to redefine industries, to break the rules and to focus on the medium to long term. Visionary leadership will be assessed not in terms of charisma but by its success in building on FedEx's strengths, preserving core values and stimulating progress towards trust-based relationships within and outside the company.

Culture and Leadership Fred Smith, the creative leader of FedEx, instilled the belief that wherever business is conducted, the use of FedEx's core values is an important ingredient of success. Under Smith's direction, FedEx has become a major technology user. Applying IT to its business enabled FedEx to surpass the rest of the industry and established Fred Smith as a "visionary who forced his and other companies to think outside the proverbial box." (Mintzberg, 2000, 89-96)

Smith's objective was to outsmart his competitors and gain a competitive advantage. He reasoned that the company "should acquire its own transportation fleet while competitors were buying space on commercial airlines and sub-contracting their shipments to third parties." Even though FedEx did not see any profit until 1976, it earned the reputation of being "absolutely, positively" reliable in its overnight delivery commitments, "an image that has become fundamental to FedEx's overall success."

The introduction of new technology allowed FedEx to install more than 100,000 PCs loaded with its own software, linking customers directly into its ordering and tracking system in the early and mid 1980s. The emergence of PCs loaded with FedEx software transformed the customer base into an electronic network. This was all the more significant because computers were still uncommon and expensive, so the use of this type of program seemed radical. "Smith's vision, well before the commercial launch of the Internet, was that information about the package is just as important as the package itself."

"Information enables corporate customers to tighten their order-to-delivery cycle, exercise just-in-time (JIT) inventory management, and synchronize production levels to market demand." (Wit and Meyer, 2004, 65) Employee performance is something Smith firmly believes in, and he is committed to providing as much information as necessary for all of his employees to perform their jobs efficiently. "FedEx's quality of service became synonymous with the quality of information provided to its workforce."

The "People-Service-Profit" philosophy was exactly what Smith wanted to convey to his employees, company, and competitors. "FedEx was the first transportation company to install computer terminals in all FedEx vehicles, and to issue hand-held barcode scanner systems to its drivers so that real-time information on package status would be available to customers." The application of these technologies changed the way FedEx employees processed and gathered information. (Mintzberg, 2000, 89-96) Using the Internet was another step that FedEx felt could increase its production and service.

In November 1994, FedEx launched a website that included package-tracking capabilities. Jim Barksdale, former CIO and COO of FedEx, and later CEO of Netscape, said, "It was the first outward and visible demonstration of a practical, productive use of the Internet by a real business for a real business purpose." One of the most important contributions to the Internet's formative years was Smith's appreciation for technology. The creation of the Internet meant FedEx could build one-to-one relationships with its customers.

The corporate culture of FedEx was based on superior customer service and displayed an attitude, from top to bottom, of "doing whatever it takes to serve customers". The expansion of the Internet, therefore, was something FedEx could use to enhance its customer base and create a competitive service advantage. "It allowed FedEx not only to let its customers pull real-time information and data into their internal systems, but also to become more involved in the internal processes of its customers." "Smith's vision and leadership have been a major contributing factor in transforming FedEx into an E-business.

Although there was no consciously planned strategy to build an E-business, the decisions the company made to align its organization structure with its systems and processes have carved out a model for building a successful business for the twenty-first century." Under Smith's leadership, the core of FedEx's strategy has been to "use IT to help customers take advantage of international markets." Of even greater significance, however, is its "information super highway", which supports transportation logistics efficiency as well as the selling of supply chain logistics solutions management.


Networking sites: a boon to the youth

Facebook is a social networking site from which we can gain plenty of knowledge; we can say that it is a treasure of knowledge. One can enhance his or her knowledge by coming into contact with intellectuals around the world. One can clear doubts and queries about any subject with the scholars available on social networking sites, and get the best tips on any subject from people present around the world.

Some social networking sites are also useful for job opportunities; one can easily find a job suited to one's requirements. Social networking is particularly vital for entrepreneurs. The self-employed can find contacts via professional groups on LinkedIn and Twitter, while business owners can use Facebook and Twitter to market their products and services. Facebook has a range of services designed to help businesses market themselves more effectively. Social networking sites are also an excellent means of entertainment, and we can watch videos of our interest on them.

Social networking sites are also a good means to propagate our religion and culture. We can share our views on our religion and gain knowledge from religious scholars around the world. We can also make people aware of the environmental issues happening around the world, which is very important in today's life, and we can protect our environment by increasing awareness among people. Finally, I would like to say that social networking sites are a boon to the young generation and can add morals to their lives if used in a proper manner.


Introduction to Networking Test Questions

1. What type of network will they be implementing to connect their two offices?
   a. LAN  b. internetwork  c. MAN  d. SAN
2. What was the primary reason to create a network?
   a. share resources  b. communicate with e-mail  c. share information
3. You're the network administrator for a company located in Arizona that has just opened an office in Texas. You need to make sure that the two locations can communicate. What type of network are you implementing?
   a. MAN  b. WAN  c. internetwork  d. extended LAN
4. You have just started a new business. You need to have three to four workstations available for your employees, who simply need to share some files and a printer, but you don't have a large budget. Security is not a major concern, but costs are. What type of network would be the most appropriate for your situation?
   a. internetwork  b. domain  c. peer-to-peer network  d. server-based network
5. What is a policy that defines the methods involved when a user logs on to the network called?
   a. audit  b. security  c. authentication  d. acceptable use
6. Which one of the following passwords meets the Windows password complexity requirement?
   a. NetWoRKing  b. NetworkingIsFun  c. N3tworking101  d. netw@rk1ngb@s1cs
7. What is a type of malware that is so difficult to detect and remove that most experts agree it is better to back up your critical data and reinstall the OS?
   a. rootkit  b. Trojan  c. hoax  d. virus  e. spyware
8. When a frame is received, which component reads the source and destination MAC addresses, looks up the destination to determine where to send the frame, and forwards it out the correct port?
   a. router  b. switch  c. repeater  d. hub
9. How does a switch "learn" MAC addresses?
   a. The switch comes loaded with the most frequently used addresses.
   b. The switch reads each frame and makes a note of where each MAC address came from.
   c. The switch uses a mathematical formula to determine what the MAC address would be for each computer connected to it.
10. Why is the use of a switch preferred over a hub?
    a. Switches can operate in full-duplex mode.
    b. Devices on a hub have to share the available bandwidth.
    c. Switches are intelligent; they read the frame and determine where to send it.
    d. All of the above.
11. What is a packet called that is intended for only one individual computer?
    a. broadcast  b. unicast  c. multicast  d. anycast
12. What is the purpose of the default route?
    a. It serves as a guideline for how to configure routes.
    b. It's a route set by Microsoft so that all information comes to their servers first.
    c. It's where the router sends all packets with destinations of which it has no knowledge.
    d. None of the above.
13. Which of the following is not a form of electromagnetic interference, or EMI?
    a. rain/fog  b. transformer  c. fluorescent lights  d. crosstalk
14. When a signal travels across the network medium, it loses strength the farther it gets from the transmitting station, to the point where the receiving station can no longer interpret the signals correctly. What is the term for this phenomenon?
    a. electromagnetic interference  b. attenuation  c. radio frequency interference
15. Which of the following is a length of cable that connects a computer to either a networking device or a patch panel?
    a. cable segment  b. backbone cable  c. patch cable
16. In what layer does the NIC operate?
    a. Network Access  b. Internetwork  c. Transport  d. Application
17. Which protocol is responsible for determining the MAC address associated with each IP address and keeping a table of its results?
    a. MAC  b. DNS  c. ARP  d. NAT
18. An IP address consists of four octets separated by periods.
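The switch behaviour asked about in questions 8-10 can be sketched as a learning table: the switch records the source MAC address of each incoming frame against its arrival port, forwards frames destined to a known MAC out of that one port, and floods frames with unknown destinations out of every other port. Port numbers and MAC strings below are illustrative:

```python
class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> port it was last seen on

    def receive(self, frame_src, frame_dst, in_port):
        """Return the list of ports the frame is forwarded out of."""
        self.mac_table[frame_src] = in_port        # learn where the sender lives
        if frame_dst in self.mac_table:
            return [self.mac_table[frame_dst]]     # known destination: one port
        return sorted(self.ports - {in_port})      # unknown destination: flood

sw = LearningSwitch(ports=[1, 2, 3, 4])
sw.receive("aa:aa", "bb:bb", in_port=1)        # bb:bb unknown yet -> flooded
reply = sw.receive("bb:bb", "aa:aa", in_port=2)  # aa:aa already learned -> [1]
```

After the first exchange, both MACs are in the table and traffic between them no longer disturbs the other ports, which is exactly why a switch outperforms a hub.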

