Cyber Security Is a Part of Today's World
Cyber security cannot be achieved by technical approaches alone, because people occupy every system role: developer, user, operator, and adversary. Major security failures arise when users distrust or circumvent protections, when organizations create perverse incentives, and when adversaries discover and exploit design defects. Effectively addressing security needs therefore requires an understanding of the humans involved.
This raises several questions. How does cyber security affect an organization's day-to-day operations? Does it waste human and other resources, or does it strike a workable balance between protection and an economically viable business? Which investments are worthwhile, and what return do they yield? How do the perceived value and the actual effectiveness of candidate practices compare? And what evidence would support such an assessment?
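One common way to make the investment question concrete is a return-on-security-investment (ROSI) estimate, which compares the reduction in expected annual loss against the cost of a control. The minimal Python sketch below uses the standard annualized-loss-expectancy framing (ALE = single loss expectancy × annual rate of occurrence); every figure in it is an illustrative assumption, not data from any real organization.

    # Illustrative return-on-security-investment (ROSI) sketch.
    # ALE (annualized loss expectancy) = SLE (single loss expectancy)
    # * ARO (annualized rate of occurrence). All numbers are assumptions.

    def ale(single_loss_expectancy: float, annual_rate: float) -> float:
        """Expected yearly loss from one class of incident."""
        return single_loss_expectancy * annual_rate

    # Hypothetical scenario: phishing-driven credential compromise.
    ale_before = ale(single_loss_expectancy=50_000, annual_rate=2.0)  # $100k/yr
    ale_after = ale(single_loss_expectancy=50_000, annual_rate=0.5)   # $25k/yr
    control_cost = 30_000  # assumed yearly cost of training plus MFA rollout

    rosi = (ale_before - ale_after - control_cost) / control_cost
    print(f"Expected loss avoided: ${ale_before - ale_after:,.0f}/yr")
    print(f"ROSI: {rosi:.0%}")  # positive => the control pays for itself

A positive ROSI suggests the control more than pays for itself, but the question of perceived value raised above is precisely whether such assumed inputs can be trusted, which is where evidence is needed.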
Behavioral, social, and decision-making research provides a reservoir of knowledge for addressing some of these problems and for guiding further research by those responsible for vulnerable systems. Such expertise is also vital for exposing gaps between intended and actual use, especially at the design stage, and for articulating the diversity and contexts of potential users. The relevant fields include sociology, economics, decision science, psychology, and linguistics; although law is not a science, it often synthesizes scientific findings under a broad, inclusive view of evidence. The traditional cyber security community, by and large, lacks familiarity with the tools and methods of the analytical social sciences, such as experimental design, the identification of confounding variables, and the generalization of findings across populations and environments, and so it relies on intuition where it should rely on scientific evidence. Building this capacity is also a difficult task for organizations.
Working with social scientists, and understanding their research and publication standards, would yield new tools and insights. The basic psychological impediment to this cooperation is the unwarranted confidence of system designers and operators in their intuitive theories of other people's behavior. The main institutional barrier to drawing on social, behavioral, and decision-making expertise is that cyber security research organizations largely lack these disciplines: such organizations often cannot identify these needs, recruit the expertise to address them, or critically assess system security claims that are undermined by human failures in design, deployment, training, or management.
Without this expertise, the cyber security community is left to reinvent research methods and scientific theories on its own: experimental design, identification of confounds, sensitivity analysis, meta-analysis, and so on. Conversely, social scientists have limited knowledge of how to apply (and explain) their research in this domain, or of the fundamental research questions it raises, since their sciences have not yet been drawn to address the unique challenges of cyber security.
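As a toy illustration of why confound identification matters, consider comparing phishing-simulation click rates for trained versus untrained staff when training happened to be rolled out department by department. In the Python sketch below, every count is invented for illustration; the point is how a naive pooled comparison and a comparison stratified by the confounder (department) can disagree.

    # Toy data: (clicks, users) from a phishing simulation, split by the
    # confounder "department". All counts are invented for illustration.
    data = {
        "dept_A": {"trained": (36, 90), "untrained": (5, 10)},   # risky dept
        "dept_B": {"trained": (1, 10),  "untrained": (18, 90)},  # careful dept
    }

    def rate(clicks: int, users: int) -> float:
        return clicks / users

    # Naive pooled comparison ignores who got trained in which department.
    for group in ("trained", "untrained"):
        clicks = sum(data[d][group][0] for d in data)
        users = sum(data[d][group][1] for d in data)
        print(f"pooled {group}: {rate(clicks, users):.0%} click rate")

    # Stratified comparison controls for the departmental confound.
    for dept, groups in data.items():
        trained = rate(*groups["trained"])
        untrained = rate(*groups["untrained"])
        print(f"{dept}: trained {trained:.0%} vs untrained {untrained:.0%}")

Here the pooled figures make training look harmful (37% vs 23% click rate), while the stratified figures show it lowered click rates in every department: training was simply concentrated where baseline risk was highest.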
To create more effective collaborations, we must find ways to promote interactions that serve the needs of researchers and practitioners in both the social sciences and the cyber security community. Part of what multidisciplinary researchers can do is check whether a result established in one context still holds in another: findings observed in one environment may not appear in a different one, and the relevant environmental differences, and their importance, may not be obvious.
The cyber security community has long recognized the seriousness of issues such as insider risk and unrecognized design defects (e.g., in legacy systems). Directly involving the social, behavioral, and decision-making sciences should be more effective than trying to create home-grown cyber security versions of those disciplines. Security and technological measures must be seen in the broader socio-technical context of everything a user or organization needs to do. How, for example, can users and organizations operate securely when security tasks and activities compete for time and effort with their other tasks? Social science can shed light on how users can meet security goals without undue effort.
More importantly, researchers can start or expand this kind of research by asking why users and organizations are expected to perform security-related tasks, rather than focusing only on the security mechanisms themselves. If user authentication is required, which authentication mechanisms are most efficient, and in which contexts is each most effective? The answers depend largely on how users and companies perceive the authentication: the time, the effort, and the trade-off between authenticating and completing the primary tasks (i.e., the tasks for which users are actually rewarded).
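A minimal sketch of how such a trade-off might be quantified is given below; the mechanisms, timings, wage figure, and compromise probabilities are all hypothetical assumptions chosen for illustration, not measurements.

    # Hypothetical comparison of authentication mechanisms by total expected
    # daily cost: time taken from the primary task plus expected breach loss.
    # All parameter values are illustrative assumptions.

    AUTHS_PER_DAY = 12        # assumed authentication prompts per user per day
    HOURLY_WAGE = 40.0        # assumed fully loaded cost of a user hour ($)
    BREACH_COST = 25_000.0    # assumed loss if the account is compromised ($)

    mechanisms = {
        #                     seconds per auth, annual compromise probability
        "password only":         (8.0, 0.020),
        "password + TOTP":       (20.0, 0.004),
        "hardware security key": (6.0, 0.001),
    }

    for name, (seconds, p_compromise) in mechanisms.items():
        time_cost = AUTHS_PER_DAY * seconds / 3600 * HOURLY_WAGE  # $/day
        risk_cost = p_compromise * BREACH_COST / 365              # $/day
        print(f"{name:22s} time ${time_cost:5.2f}  risk ${risk_cost:5.2f}"
              f"  total ${time_cost + risk_cost:5.2f} per user-day")

The point of even a crude model like this is that the "most secure" mechanism is not automatically the best choice once the time it takes from the primary, rewarded task is priced in.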
Human behavior affects every stage of a cyber security system's life cycle: design, implementation, assessment, monitoring, operation, revision, maintenance, and training. Each stage offers opportunities to increase or reduce vulnerability: how design teams are formed and managed, how the usability of procedures and interfaces is tested, what incentives and resources are devoted to security, how operators are trained and evaluated, and how the business case is understood. Conflicts among stakeholders are a known issue in software engineering, and methods for detecting and managing such conflicts can be evaluated and adapted, including trade-offs among security, utility, and usability; a sketch of one such method follows.
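As a sketch of what lightweight conflict detection could look like, the hypothetical Python snippet below represents each stakeholder requirement as a desired direction on a shared attribute and flags pairs that pull in opposite directions; the stakeholders, attributes, and requirements are invented for illustration.

    from itertools import combinations

    # Each hypothetical requirement names a stakeholder, an attribute it
    # touches, and the direction it pushes that attribute ("up" or "down").
    requirements = [
        ("security team", "session_timeout", "down"),  # shorter sessions
        ("support desk",  "session_timeout", "up"),    # fewer re-logins
        ("security team", "password_length", "up"),
        ("UX team",       "login_steps",     "down"),
        ("compliance",    "login_steps",     "up"),    # mandatory MFA step
    ]

    # Flag pairs of requirements that push the same attribute in opposite
    # directions; these are candidates for explicit trade-off negotiation.
    for (s1, attr1, d1), (s2, attr2, d2) in combinations(requirements, 2):
        if attr1 == attr2 and d1 != d2:
            print(f"conflict on '{attr1}': {s1} wants {d1}, {s2} wants {d2}")

Flagged pairs are not errors to be eliminated but trade-offs, such as security versus usability, that need explicit negotiation among the stakeholders.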
Cyber security professionals must also be prepared, in modern threat environments, for adversaries who adopt, and react to, social science based approaches themselves. The emerging Internet of Things is one example of the manifold, cross-disciplinary nature of the cyber security challenge.