Research methodologies – Analysis of the definition

Man has always been curious to know about himself and his surroundings. Every individual is keen to distinguish between reality and falsehood, but more often than not his or her thirst for the truth is left unquenched. The reason is that the methods he or she uses to dig out the truth are not trustworthy. This happens because, unfortunately, our societies and cultures do not encourage social research. Rather, people prefer to sit back at home and rely on alternative sources which are not based on any scientific method or research.

There are different ways and means through which we acquire knowledge. This knowledge may be highly scientific or simply know-how about routine things. The best of all the sources of knowledge is social research. Social research is defined as a collection of methods people use systematically to produce knowledge. It is a more structured, organized, and systematic process than knowledge based on alternative sources.

It rarely happens that we use social research to find answers to our everyday questions; rather, we use alternative sources of knowledge. These sources could be:

1) the word of the authority,

2) traditions,

3) common sense,

4) media myths,

5) personal experiences.

All these sources are weaker than social research. We use them only because we lack the motivation to find out the reality; out of sheer laziness we decide to rely on these sources of knowledge.

The word of authority, to begin with, is not a reliable source of knowledge at all. By authority we mean parents, the government, the chief executive of a firm, or anybody else who is authoritative. The authority, whoever it may be (parents, government, etc.), will mould the truth in a way that suits itself. The authority is always biased in one manner or another. We can find many examples to support this argument; for instance, state-owned TV channels keep giving biased statements about government policies.

They always side with the government and oppose the opposition. In Pakistan, the PTV Khabarnama is the final word for a layman, but those who are exposed to other sources of knowledge would agree that it is full of prejudices and exaggerations. A secondary example is that of teachers: young children are so influenced by their teachers that even if the teacher has made a spelling mistake, they will insist to anyone else teaching them that their teacher is right.

Another weak source of knowledge is tradition. Especially in those areas of the world where literacy rates are low, people blindly follow traditions. Whenever they face a problem, they look to the traditional solution. For instance, when someone loses hair, he or she never goes to a doctor but rather sits at home and applies all kinds of hair oils recommended by grandmothers. In extreme cases people blindly follow superstitions which have no scientific basis. Traditions vary from culture to culture: something considered right in eastern culture might be considered wrong in western culture. Traditions therefore cannot be taken as an authentic source of knowledge.

Common sense is another way by which people tend to find answers to their questions or solutions to their problems. It is the most commonly used source of knowledge. Over time human beings learn many things which later become part of their common sense, and they rely on it more than on anything else. For example, if someone has launched a new product in the market and met overwhelming success, he or she would increase production out of common sense. However, it might be the case that the initial success was only a result of ‘fancy sales’. Research would have helped him or her decide whether or not to increase production. Sometimes common sense proves to be right, but at other times it does not; therefore it cannot be relied upon.

Media is a great source of information and hence of knowledge. It has to be taken into consideration that media does not only inform or entertain people; it also moulds public opinion. The formation of ideas is one of the major jobs of the media. Media might be books, newspapers, TV, or anything else that comes under the caption of mass communication. Media is very powerful, as it leaves an impact on the minds of the people, and in this way it has created many myths. A layman does not even question whether what the media shows is truth or falsehood.

There are many things we claim to know about but have never come across face to face. The knowledge we have about them is through the media. It could be a place, a human being, a product, or even a concept. For instance, no one has ever met a genie, but even a child has the concept that a genie is huge and horrible-looking, with big teeth and big ears. This concept has been learnt from the media, in this case storybooks and cartoons for children. Another example is that CNN never shows Israel as an aggressor state; as a result, an average American does not even know that Israel is an aggressor state. On the other hand, research and historical facts show that Israel has been unfair to the Palestinians.

The weakest source of knowledge is personal experience, yet we as human beings believe it to be the strongest. No individual is ready to admit that what he has seen with his naked eyes could be wrong or a misunderstanding, and he or she will base future decisions on that perspective. For example, if one goes to a restaurant and happens to have a dish which he or she finds delicious, that individual will subconsciously keep believing that this particular restaurant sells tasty food. On the other hand, an individual who goes to the same restaurant but does not get a tasty dish will believe that the restaurant sells rotten food. However, neither individual may have experienced the truth. The truth could only be established through research: going to the restaurant again and again and taking the viewpoints of the people coming there over and over again.

The above arguments show that truth or reality can be revealed only through research. All the alternative sources of knowledge that we use are weak; they can be used, but they cannot be relied upon. In order to make worthwhile and professional decisions we simply cannot depend upon these alternative sources. We have to carry out social research in order to find out the truth about a certain thing, because research is based on facts and figures and is organized and systematic, so it is far less likely to mislead us. Research does not condemn the alternative sources of knowledge; rather, it uses them in an organized manner, with a research process, facts, and figures, to dig out the truth.


Introduction to Statistics

Random sample: each member of the population has the same chance of being selected.
Representative sample: characteristics should represent those of the target population without bias.
Observational study: no intervention by the investigator; no treatment imposed.
Experimental study: the investigator has some control over the determinant.
Variables:
Categorical – each observation falls into a finite number of groups.
Nominal: named variables with no implied order, e.g. personality type.
Ordinal: grouped variables with implied order, e.g. level of education.
Continuous – measured variables.
Discrete: take discrete values, e.g. number of children.
Numerical: can assume any value within a certain range/interval, e.g. height.
Types of designs:
True experiment: the researcher can randomly allocate observations to conditions.
Quasi-experiment: demonstrates a relationship between an IV and DV; the researcher makes use of naturally occurring groups and cannot make cause-and-effect statements.
Non-experiment (correlational design): asks whether there is a relationship between variables; cannot make cause-and-effect statements.
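The idea that a random sample gives every member of the population the same chance of selection can be illustrated with a short sketch. The population and seed below are made up for the example; only Python's standard library is used:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# A hypothetical population of 100 numbered members.
population = list(range(1, 101))

# Simple random sample: random.sample gives every member the same
# chance of being selected, and no member is chosen twice.
sample = random.sample(population, k=10)

print(len(sample))                     # 10 members drawn
print(set(sample) <= set(population))  # True: drawn entirely from the population
```

A representative sample, by contrast, would additionally require checking that the sample's characteristics (age, sex, and so on) match the target population.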

Between groups: two groups are compared on some outcome measure. Within-subjects: participants experience each condition of an IV, with measurements of some outcome taken on each occasion. Extraneous variables: variables present in an experiment which might interfere with the relationship between the IV and DV. Confounding variables: mediating variables that can adversely affect the relation between the IV and DV. Internal validity: the extent to which a causal relationship can be assumed between the IV and DV.

External validity: the degree to which you can generalize the results of your study to some underlying population.
T-tests
One-sample t-test – A: data should arise from a normal population.
Paired t-test – A: observations must be independent, arise from a normal distribution, and come from populations of the same spread.
Independent-sample t-test – A: normally distributed, homogeneity of variances, independence of the observations.
Correlation/regression – A: the relation in the population is linear, the residuals in y have a constant standard deviation, and the residuals arise from a normal distribution.
Chi-square tests of goodness of fit and tests of independence – A: each expected count has to be larger than five.
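As a minimal sketch of the one-sample t-test listed above (illustrative data, standard library only; note that a paired t-test is just this same test applied to the pairwise differences):

```python
import math
from statistics import mean, stdev

def one_sample_t(sample, mu0):
    """t = (sample mean - mu0) / (s / sqrt(n)); assumes the data
    arise from a normal population, as the assumptions above state."""
    n = len(sample)
    return (mean(sample) - mu0) / (stdev(sample) / math.sqrt(n))

# Hypothetical measurements, tested against a population mean of 5.0.
data = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8]
t = one_sample_t(data, mu0=5.0)  # roughly 0.65 with these numbers
```

The resulting t value would then be compared against the t distribution with n − 1 degrees of freedom to obtain a p-value.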


Notes Experimental Psych Overview

Inner circle: Sociology, Biology, Chemistry, Physics, Astronomy, Anthropology, Psychology, and others. Outer circle (OC): Art, Music, Literature, Language.

Solvable and unsolvable problems:
Solvable problem – one which poses a question that can be answered with the use of normal capacities (answers questions under the inner and outer circles).
Unsolvable problem – raises a question that is unanswerable; this concerns supernatural phenomena (falling under metaphysical disciplines).

Science is empirical (observable): solvable problems are susceptible to empirical solution by studying observable events.

Science defined:
1. Sciences apply the scientific method (SM) to solvable problems.
2. Disciplines in the OC do not use the SM, but their problems are typically solvable.
3. Disciplines outside the circles neither use the SM nor pose solvable problems.
Therefore, science is the application of the SM to solvable problems.

Psychology as a Science
Psychology is materialistic, objective, and deterministic. "If psychology is ever to become a science, it must follow the example of the physical sciences: it must be materialistic, mechanistic, deterministic, objective." – Watson

Materialism (same as physicalism) – observable responses, physical events.
Objectivity – the principle of intersubjective reliability. Intersubjectivity – two or more people share the same experience.
Determinism – the assumption that there is lawfulness.
Experimentation is the most powerful research method. Psychology became a science by applying the SM to solvable problems; psychological experimentation is an application of the SM.

Stating the Problem and Hypothesis
Testing the Hypothesis
1. Select participants.
2. Randomly assign to groups.
3. Randomly assign groups to condition/treatment:
a. Experimental group given a novel treatment.
b. Control group given normal treatment.
4. Define the IV.
5. Define the DV.
6. Control relevant EVs.
7. Conduct statistical tests.
8. Generalize and explain the hypothesis.
9. Predict new situations.

Terms
1. Replication – an additional experiment is conducted but with the same process.
2. Stimuli – aspects of the external environment.
3. Response – aspects of behavior.
4. S-R laws – if a certain environmental characteristic is changed, behavior of a certain type also changes.
5. Variable – anything that can change in amount.
6. Independent variable – manipulated, treatment, investigation.
7. Dependent variable – measure of any change in behavior.
8. Continuous variable – capable of changing by any amount.
9. Discontinuous variable – assumes only numerical values that differ by clearly defined steps, without intermediate values possible.
10. Hypothesis – tentative solution to a problem.

Functions of Apparatus
1. To administer the experimental treatment.
2. To collect data.
3. To reduce experimenter influences.
4. To analyze data specifically.

Conducting Statistical Tests
Chance difference vs. reliable difference.
Real: statistically reliable. Accidental: due only to chance. Significant: reliable (the preferable term). Confirmed: probably true. Disconfirmed: probably false.

Chapter 2 – The Problem
Scientific inquiry starts when we have already collected some knowledge but there is something we still do not know.

Ways a problem is manifested:
1. When there is a noticeable gap in the results of investigations. (Students conducting theses read related literature, so their storehouse of information is filled with new knowledge.)
2. When the results of several inquiries disagree; the results are contradicting.
3. When a fact exists in the form of unexplained information. When a new theory explains a fact, it also explains other phenomena, because theories are general enough to explain many facts.

Defining a Solvable Problem
1. The proposed solution is testable.
2. The proposed solution is relevant to the problem.

What is a testable hypothesis?
a. One for which it is possible to determine that it is either true or false.
b. Knowledge is expressed in the form of propositions; the requirement that knowledge can occur only in the form of a statement is critical for the process of testability.
c. A degree of probability is used instead of true or false.

Kinds of possibilities:
1. Presently attainable – the possibility is within our power at the present time.
2. Potentially attainable – possibilities that may come within the powers of people at some future time.

Classes of testability:
1. Presently testable – related to presently attainable.
2. Potentially testable – related to potentially attainable.

Working principle for the experimenter: applying the criterion of testability.
a. Do all the variables contained in the hypothesis actually refer to empirically observable events?
b. Is the hypothesis formulated in such a way that it is possible to relate it to empirically observable events and render a decision on its degree of probability?

Unsolvable Problems
The unstructured problem: inadequately defined terms and the operational definition.
Solution through operational definitions. Operational definition – one that indicates that a certain phenomenon exists, and does so by specifying precisely how the phenomenon is measured. Adequate operational definitions of the variables with which a science deals are a prerequisite to advancement. Operationism was initiated by P. W. Bridgman in 1927.

Impossibility of collecting relevant data: vicious circularity renders problems unsolvable.

Additional considerations: problems should be technologically or theoretically important; problems of the impasse variety should be avoided unless creative solutions are possible. Psychological reactions to problems – we should emphasize a truth criterion and not dismiss a discovery only because it is disturbing.
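The random-assignment steps in the hypothesis-testing procedure above (select participants, randomly assign them to groups, assign groups to conditions) can be sketched as follows. The participant labels and group sizes are made up for illustration:

```python
import random

random.seed(7)  # fixed seed so the example is reproducible

# Step 1: select participants (here, 20 hypothetical labels).
participants = [f"P{i:02d}" for i in range(1, 21)]

# Steps 2-3: shuffle, then split into the experimental group
# (novel treatment) and the control group (normal treatment).
random.shuffle(participants)
experimental = participants[:10]
control = participants[10:]
```

Random assignment of this kind is what lets an experiment control relevant extraneous variables: any pre-existing differences are spread across groups by chance rather than by selection.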


Fourier Transform Infrared Spectroscopy

Introduction. The infrared region spans 12800–10 cm-1. It can be divided into the near-infrared region (12800–4000 cm-1), the mid-infrared region (4000–200 cm-1) and the far-infrared region (50–1000 μm, i.e. 200–10 cm-1). Scientists have established various ways to utilize infrared light. Infrared absorption spectroscopy is the method scientists use to determine the structures of molecules from the molecules' characteristic absorption of infrared radiation. An infrared spectrum is a molecular vibrational spectrum.

When exposed to infrared radiation, sample molecules selectively absorb radiation of specific wavelengths, which causes a change in the dipole moment of the sample molecules. Consequently, the vibrational energy levels of the sample molecules transfer from the ground state to an excited state. The frequency of an absorption peak is determined by the vibrational energy gap. The number of absorption peaks is related to the number of vibrational degrees of freedom of the molecule. The intensity of the absorption peaks is related to the change of dipole moment and the probability of the transition between energy levels.

Therefore, by analyzing the infrared spectrum, one can readily obtain abundant structural information about a molecule. Most molecules are infrared active, except for several homonuclear diatomic molecules such as O2, N2 and Cl2, due to the zero dipole change in the vibration and rotation of these molecules.
Concept: Fourier transform spectroscopy is a less intuitive way to obtain the same information. Rather than shining a monochromatic beam of light at the sample, this technique shines a beam containing many frequencies of light at once, and measures how much of that beam is absorbed by the sample.

Next, the beam is modified to contain a different combination of frequencies, giving a second data point. This process is repeated many times. Afterwards, a computer takes all these data and works backwards to infer what the absorption is at each wavelength. The beam described above is generated by starting with a broadband light source, one containing the full spectrum of wavelengths to be measured. The light shines into a Michelson interferometer, a certain configuration of mirrors, one of which is moved by a motor. As this mirror moves, each wavelength of light in the beam is periodically blocked, transmitted, blocked, transmitted by the interferometer, due to wave interference. Different wavelengths are modulated at different rates, so that at each moment the beam coming out of the interferometer has a different spectrum.
Fourier Transform of Interferogram to Spectrum. The interferogram is a function of time, and the values output by this function are said to make up the time domain. The time domain is Fourier transformed to get a frequency domain, which is deconvoluted to produce a spectrum.
Step 1: The first step is sample preparation. The standard method to prepare a solid sample for an FTIR spectrometer is to use KBr.

About 2 mg of sample and 200 mg of KBr are dried and ground. The particle size should be uniform and less than two micrometers. Then the mixture is squeezed to form transparent pellets which can be measured directly. A liquid with a high boiling point or a viscous solution can be placed between two NaCl pellets; the sample is then fixed in the cell by screws and measured. A volatile liquid sample is dissolved in CS2 or CCl4 to form a 10% solution, and the solution is injected into a liquid cell for measurement. A gas sample needs to be measured in a gas cell with a KBr window on each side. The gas cell should first be evacuated.

Then the sample can be introduced into the gas cell for measurement.
Step 2: The second step is getting a background spectrum by collecting an interferogram and subsequently converting it to frequency data by inverse Fourier transform. We obtain the background spectrum because the solvent in which we place our sample will have traces of dissolved gases as well as solvent molecules that contribute information that is not from our sample. The background spectrum will contain information about the gas and solvent molecules, which may then be subtracted away from our sample spectrum in order to gain information about just the sample.

Figure 6 shows an example of an FTIR background spectrum.
Figure 6. Background IR spectrum
The background spectrum also takes into account several other factors related to instrument performance, which include information about the source, interferometer, and detector, and the contribution of ambient water (note the two irregular groups of lines at about 3600 cm-1 and about 1600 cm-1 in Figure 6) and carbon dioxide (note the doublet at 2360 cm-1 and the sharp spike at 667 cm-1 in Figure 6) present in the optical bench.

Step 3: Next, we collect a single-beam spectrum of the sample, which will contain absorption bands from the sample as well as from the background (gaseous or solvent).
Step 4: The ratio between the single-beam sample spectrum and the single-beam background spectrum gives the spectrum of the sample (Figure 7).
Advantages:
Speed: Because all of the frequencies are measured simultaneously, most measurements by FT-IR are made in a matter of seconds rather than several minutes.

This is sometimes referred to as the Fellgett advantage.
Sensitivity: Sensitivity is dramatically improved with FT-IR for many reasons. The detectors employed are much more sensitive, the optical throughput is much higher (referred to as the Jacquinot advantage), and the fast scans enable the co-addition of several scans in order to reduce the random measurement noise to any desired level (referred to as signal averaging).
Mechanical simplicity: The moving mirror in the interferometer is the only continuously moving part in the instrument, so there is very little possibility of mechanical breakdown.
Internally calibrated: These instruments employ a HeNe laser as an internal wavelength calibration standard (referred to as the Connes advantage). These instruments are self-calibrating and never need to be calibrated by the user.
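The interferogram-to-spectrum conversion described above can be sketched numerically. The following is an illustrative pure-Python discrete Fourier transform of a synthetic two-line interferogram, not instrument code; the line positions, sampling interval, and number of points are all made up:

```python
import cmath
import math

# Synthetic interferogram sampled along the optical path difference.
n = 256
dx = 1.0 / 12800.0  # sampling interval in cm (illustrative)
# Two hypothetical lines at 1600 cm-1 and 2400 cm-1 (the second weaker).
interferogram = [math.cos(2 * math.pi * 1600 * i * dx)
                 + 0.5 * math.cos(2 * math.pi * 2400 * i * dx)
                 for i in range(n)]

def dft_magnitudes(signal):
    """Plain discrete Fourier transform; bin k corresponds to a
    wavenumber of k / (n * dx) cm-1."""
    m = len(signal)
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * i / m)
                    for i, s in enumerate(signal)))
            for k in range(m // 2)]

spectrum = dft_magnitudes(interferogram)
resolution = 1.0 / (n * dx)  # 50 cm-1 per bin with these settings

# The two strongest bins recover the two line positions.
top_bins = sorted(range(len(spectrum)), key=lambda k: -spectrum[k])[:2]
peak_wavenumbers = sorted(k * resolution for k in top_bins)
```

A real spectrometer would use an FFT (this O(n²) loop is only meant to make the transform explicit), and the resulting single-beam spectrum would then be ratioed against the background spectrum as in Step 4.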


Correlational Research: Overview

Correlational Research. "Correlation is a statistical technique that can show whether and how strongly pairs of variables are related" (Creative Research Systems, 2010). The correlational research method is used in scientific research to study the association and/or relationship between variables. The strength of the association between two variables is expressed as a correlation coefficient, which is calculated as a quantitative measure. The goal of using this method is to observe whether one or more variables predict other variables, without establishing a causal relationship between them (Creative Research Systems, 2010).

One great article I found asks: "Can Money Buy Happiness: Are Lottery Winners Any Happier in the Long Run?" At first, people see on television how happy and ecstatic lottery winners are; past that point, however, there are no details on how their life goes on from there. The question of whether they are happier or not still remains. The researchers developed the study by asking two paralyzed accident victims, a control group, and lottery winners about their level of happiness. "There was no statistically significant difference between the lottery winners and the control group with respect to how happy they were at this stage of their lives" (Brickman, 1978). Neither the control group nor the lottery winners gave any "evidence" of how happy they were going to be in a couple of years (the difference was statistically insignificant). The lottery winners did not think about, judge, or concern themselves with how happy they would be in a few years, as the accident victims did. The result was that the relationship between money and the level of happiness is not linear.

An increase in money might or might not increase your happiness (it depends on the events). "These findings may also suggest that happiness may be relative. We may not be able to reach a higher level of happiness as a result of winning the lottery. Winning the lottery may simply raise our standards" (Brickman, 1978). Researchers may use the correlational method to determine relationships between characteristics, attitudes, behaviors, and events. As I mentioned before, the goal of this method is to find out whether there is a direct relationship between the variables, as well as any commonalities in each relationship.

Even though it does not indicate a cause-and-effect relationship, when such a relationship is present, one variable might reflect the change in the other variable. The only way a researcher can find out the effects one variable has on another is through research and experiment (Wiley, 2011). When using the correlational method to develop research, causal claims must not be made, because the method cannot prove that one variable changes the other. It only shows, in a systematic way, that the variables are related. To be able to prove a cause-and-effect relationship, an experimental method must be used.

In this case, because the method cannot test cause and effect, the result can be deceiving and misinterpreted, especially when there are more than two variables involved. When cause and effect cannot be proved through this method and assumptions are made anyway, errors in the outcome follow. Therefore, as mentioned before, correlation does not mean causation, which is a limitation when it comes to the conclusions that can be drawn (Bradley, 2000). Positive correlation occurs when an increase in one variable accompanies an increase in another variable.

For example, the more money people win, the happier they are; however, this does not specify long-term results. Given that they are happy for the moment, the more times they win, the happier they get, which results in a positive correlation. With negative correlation, the variables work the opposite way: when one increases, the other decreases. In the example above, I would say that people who win the lottery also have a lot more responsibilities to handle, which requires more effort, time, and energy.

Their happiness level might increase for the moment; however, it will start to decrease in the long run, due to all the extra "work" and pressure they would have. Last but not least, there is zero correlation, which occurs when there is no relationship between the variables. The example study above wants to demonstrate whether more money brings more happiness. Given that two of the subject groups did not win the lottery, it would not really prove whether or not money would bring them happiness; asking about their current and future level of happiness has no correlation with the people who already won the lottery and whose life has changed (McLeod, 2008). Because the correlation coefficient does not reflect nonlinear association between two variables, "the correlation coefficient measures whether there is a trend in the data, and what fraction of the scatter in the data is accounted for by the trend", as opposed to how nearly a scatterplot follows a straight line (Stark, 2011). Because the correlation coefficient only measures linear relationships, it is possible to have a strong nonlinear relationship even when r is close to 0 or equal to 0.
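The claim that r captures only the linear trend can be checked directly. Below is a small illustration (made-up symmetric data, standard library only) computing the sample Pearson coefficient by hand: a perfect line gives r = 1, while a perfect but purely nonlinear parabola gives r = 0:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient: measures the linear trend only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

xs = list(range(-5, 6))                          # symmetric about zero
r_line = pearson_r(xs, [2 * x + 1 for x in xs])  # perfectly linear: r = 1
r_curve = pearson_r(xs, [x * x for x in xs])     # perfect parabola: r = 0
```

Here y = x² is a perfect deterministic relationship, yet r comes out 0; this is exactly the trap of reading r = 0 as "no relationship".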

In such a case, a scatter diagram may indicate the presence of a nonlinear relationship between the two variables. If r is not correctly interpreted, the result will make no sense, and a nonsense correlation would occur. One can also assume that the two variables are related; however, r cannot prove a cause-and-effect relationship between the two (Amit, 2009). Another problem occurs when a function has multiple independent variables: it is very hard to attribute changes to any one independent variable.

That is why it is important for a researcher to make sure the research is developed and experimented on with the two variables in question, which means selecting the most significant and credible of the correlated variables for use in the function. When it comes to floor and ceiling effects, if a researcher analyzes "data using analysis of variance, the interaction effect would very likely be statistically significant" (Zechmeister, 2001). When a minimum score occurs in any condition of the experiment, a floor effect occurs; when maximum performance occurs, a ceiling effect happens.

In this case the researcher can select the dependent variables so as to avoid floor and ceiling effects and determine differences across conditions. "Thus, the danger of floor and ceiling effects is that they may lead researchers to believe an interaction is present in the data, when in fact the interaction occurs because the measurement scale does not allow the full range of responses that participants could make" (Zechmeister, 2001, p. 202). When the experiment is too easy, the experimental manipulation shows little or no effect (a ceiling effect).

It can be reduced by making the experiment harder and more challenging. When the task is too challenging or too difficult, the experimental manipulation will likewise fail to show an effect (a floor effect); this can be reduced or eliminated by making the experimental task easier, which restores the balance. A pilot study may also be conducted to find out whether a floor effect or a ceiling effect is present (Huron, 2000).

References
Bradley, Megan (2000). Cyberlab for Psychology Research. Methods of Research. Retrieved November 26, 2011, from http://faculty.frostburg.edu/mbradley/researchmethods.html#corr
Brickman, P., Coates, D., & Janoff-Bulman, R. (1978). Lottery Winners and Accident Victims: Is Happiness Relative? Journal of Personality and Social Psychology. Retrieved November 26, 2011, from http://www.psychologyandsociety.com/lotterystory.html
Choudhury, Amit (2009). Statistical Correlation. Retrieved November 26, 2011, from http://www.experiment-resources.com/statistical-correlation.html
Creative Research Systems (2010). The Survey System. Correlation. Retrieved November 26, 2011, from http://www.surveysystem.com/correlation.htm
Huron, David (2000). Glossary of Research Terms in Systematic Musicology. Retrieved November 26, 2011, from http://musicog.ohio-state.edu/Music829C/glossary.html
McLeod, Saul (2008). Simply Psychology. Correlation. Retrieved November 26, 2011, from http://www.simplypsychology.org/correlation.html
Stark, P. B. (2001). Correlation and Association. Chapter 7. Retrieved November 26, 2011, from http://www.stat.berkeley.edu/~stark/SticiGui/Text/correlation.htm
Wiley, John (2011). CliffsNotes.

Research Designs and Methods. Retrieved November 26, 2011, from http://www.cliffsnotes.com/study_guide/Research-Designs-and-Methods.topicArticleId-26831,articleId-26754.html
Zechmeister, J. S., Zechmeister, E. B., & Shaughnessy, J. J. (2001). Essentials of Research Methods in Psychology. Chapters 5 and 7. New York: McGraw-Hill. Retrieved from Kaplan University DocSharing.


Barriers of Research Utilization for Nurses

CLINICAL NURSING ISSUES
Bridging the divide: a survey of nurses' opinions regarding barriers to, and facilitators of, research utilization in the practice setting
Alison Margaret Hutchinson BAppSc, MBioeth, PhD Candidate, Victorian Centre for Nursing Practice Research, School of Nursing, University of Melbourne, Australia
Linda Johnston BSc, PhD, DipN, Professor in Neonatal Nursing Research, Royal Children's Hospital, Melbourne, and Associate Director, Victorian Centre for Nursing Practice Research, Melbourne, Australia
Submitted for publication: 4 March 2003. Accepted for publication: 29 August 2003

Correspondence: Alison M. Hutchinson, School of Nursing, University of Melbourne, 1/723 Swanston St, Carlton, VIC 3053, Australia. Telephone: +61 3 8344 0800. E-mail: alihutchinson@bigpond.com
HUTCHINSON A.M. & JOHNSTON L. (2004) Journal of Clinical Nursing 13, 304–315. Bridging the divide: a survey of nurses' opinions regarding barriers to, and facilitators of, research utilization in the practice setting
Background. Many researchers have explored the barriers to research uptake in order to overcome them and identify strategies to facilitate research utilization.

However, the research–practice gap remains a persistent issue for the nursing profession.
Aims and objectives. The aim of this study was to gain an understanding of perceived influences on nurses' utilization of research, and to explore what differences or commonalities exist between the findings of this research and those of studies conducted in various countries during the past 10 years.
Design. Nurses were surveyed to elicit their opinions regarding barriers to, and facilitators of, research utilization.

The instrument comprised a 29-item validated questionnaire, titled the Barriers to Research Utilisation Scale (BARRIERS Scale), an eight-item scale of facilitators, provision for respondents to record additional barriers and/or facilitators, and a series of demographic questions. Method. The questionnaire was administered in 2001 to all nurses (n = 761) working at a major teaching hospital in Melbourne, Australia. A 45% response rate was achieved. Results. The greatest barriers to research utilization reported included time constraints, lack of awareness of available research literature, insufficient authority to change practice, inadequate skills in critical appraisal and lack of support for implementation of research findings. The greatest facilitators of research utilization reported included availability of more time to review and implement research findings, availability of more relevant research, and colleague support. Conclusion. One of the most striking features of the findings of the present study is that the perceptions of Australian nurses are remarkably consistent with the reported perceptions of nurses in the US, UK and Northern Ireland during the past decade. Relevance to clinical practice.

If the use of research evidence in practice results in better outcomes for our patients, this behoves us, as a profession, to address, with conviction and a sense of urgency, the issues surrounding support for implementation of research findings, authority to change practice, time constraints and ability to critically appraise research.

© 2004 Blackwell Publishing Ltd

Key words: barriers to research utilization, facilitators of research utilization, research dissemination, research implementation, research utilization

Introduction and background

For over 25 years research utilization has been discussed in the nursing literature with growing enthusiasm and amid increasing calls for the use of research findings in practice. Additionally, the evidence-based practice movement, which emanated in the early 1990s (Evidence-Based Medicine Working Group, 1992), has highlighted the importance of incorporating research findings into practice. Furthermore, controversy surrounding the achievement of professional status has resulted in an increased awareness of the need for a research-based body of knowledge to underpin nursing practice.

Gennaro et al. (2001, p. 314) contend: 'Using research in practice not only benefits patients but also strengthens … If nursing is truly a profession, and not just a job or an occupation, nurses have to be able to continually evaluate the care they give and be accountable for providing the best possible care. Evaluating nursing care means that nurses also have to evaluate nursing research and determine if there is a better way to provide care.' Twelve years prior, Walsh & Ford (1989) warned that the professional integrity of nursing was threatened by dependence upon experience-based practice.

Similarly, Winter (1990, p. 138) cautioned that conduct of nursing practice in this manner is 'the antithesis of professionalism, a barrier to independence, and a detriment to quality care.' Winter therefore recommended that nurses 'evaluate their status as research consumers, to identify problems in this area, and to develop means to better use research findings' (p. 138). Evidence-based practice, which should comprise the use of broad-ranging sources of evidence, including the clinician's expertise and patient preference (Sackett et al., 1996), includes the use of research evidence as a subset (Estabrooks, 1999).

Consistent with the classification of knowledge utilization, three types of research use have been outlined (Stetler, 1994a,b; Berggren, 1996). The first is described as 'instrumental use' and involves acting on research findings in explicit, direct ways, for example application of research findings in the development of a clinical pathway. The second is termed 'conceptual use' and involves using research findings in less specific ways, for example changing thinking. The final type of research use, described as 'symbolic use', involves the use of research results to support a predetermined position.

The nursing literature is replete with examples of limited use of research in practice and discussion surrounding perceived barriers to research utilization (Hunt, 1981; Gould, 1986; Closs & Cheater, 1994; Lacey, 1994). Despite this, the phenomenon of the research–practice gap, the gap between the conduct of research and the use of that research in practice, remains an issue of major importance for the nursing profession. Many researchers have explored the barriers to research uptake in order to overcome them and identify strategies to facilitate research utilization (Kirchhoff, 1982; MacGuire, 1990; Funk et al., 1991a,b, 1995b; Closs & Cheater, 1994; Hicks, 1994, 1996; Lacey, 1994; Rizzuto et al., 1994; Hunt, 1996; Walsh, 1997a,b). Hunt (1981) suggested that nurses fail to utilize research findings because they do not know about them, do not understand them, do not believe them, do not know how to apply them, and are not allowed to use them. According to Hunt (1997), the barriers to research utilization and, therefore, to evidence-based practice fall into five main categories: research, access to research, nurses, process of utilization and organization.

Self-reported utilization of research is one method that has frequently been implemented to elicit the extent of research utilization. Responses to selected research findings have been used to elicit and explore respondents' awareness and use of respective findings (Ketefian, 1975; Berggren, 1996). Numerous researchers have also undertaken to investigate, through self-reporting, the opinions of nurses in regard to barriers to research utilization in the practice setting. Funk et al. (1991b) explored research utilization in the US using a postal questionnaire titled the Barriers to Research Utilization Scale (BARRIERS Scale).

Their purpose was to develop a tool to assess the perceptions of clinicians, administrators and academics in regard to barriers to research utilization in clinical practice. Rogers' (1995) model of 'diffusion of innovations', a theoretical framework which describes the process of communication, through certain channels within a social network, of an idea, practice or object over time, was used to develop a 29-item scale. The questionnaire was sent to a random sample of 5000 members of the American Nurses' Association, with a resulting response rate of 40%.

On the data generated, Funk et al. (1991b) undertook an exploratory factor analysis to elicit a four-factor solution which closely corresponded with Rogers' (1995) 'diffusion of innovations' model. The factors translated into characteristics of the adopter, comprising the nurse's research values, skills and awareness; the organization, incorporating setting barriers and limitations; the innovation, including qualities of the research; and communication, including accessibility and presentation of the research.

Items associated with the clinical setting, a characteristic of the organization, were perceived as the main barriers to research utilization. These included the views that nurses lack sufficient authority to implement change; nurses have insufficient time to implement change; and there is a lack of cooperation from medical staff. Approximately 21% of the respondents in this study were classified as administrators. Over three-quarters of the items on the BARRIERS Scale were rated as great or moderate barriers by over half the administrators. The administrators identified factors relating to the nurse, the organizational setting and the presentation of research among the greatest barriers. Overall, they cited the organizational setting as the greatest barrier to research use. Approximately 46% of the respondents were classified as clinicians (nurses working in the clinical setting). The clinicians overwhelmingly identified factors associated with the organizational setting as being the greatest barriers to research utilization. They rated all eight factors associated with the setting in the top 10 barriers to research utilization.

The clinicians rated perceived 'lack of authority to change patient care procedures', 'insufficient time on the job to implement new ideas' and being 'unaware of the research' as the top three barriers to research utilization. The BARRIERS Scale (Funk et al., 1991b) has been used extensively since it was developed in 1991 as one method to explore the perceived influences on nurses' utilization of research findings in their practice. At least 17 studies that employed the BARRIERS Scale to elicit opinions of nurses regarding barriers to research utilization in practice have been reported in the nursing literature.

Most studies reported the barriers in ranked order according to the percentage of respondents who rated items as moderate or great barriers. Insufficient time to read research and/or implement new ideas was rated in the top three barriers in 13 studies (Funk et al., 1991a, 1995a; Carroll et al., 1997; Dunn et al., 1997; Lewis et al., 1998; Nolan et al., 1998; Rutledge et al., 1998; Retsas & Nolan, 1999; Closs et al., 2000; Parahoo, 2000; Retsas, 2000; Griffiths et al., 2001; Marsh et al., 2001; Parahoo & McCaughan, 2001).

A perceived lack of authority to change patient care procedures was reported in the top three barriers in eight studies (Funk et al., 1991a; Walsh, 1997a; Nolan et al., 1998; Closs et al., 2000; Parahoo, 2000; Retsas, 2000; Marsh et al., 2001; Parahoo & McCaughan, 2001). In eight studies, the item 'statistical analyses are not understandable' was cited in the top three barriers (Funk et al., 1995b; Dunn et al., 1997; Walsh, 1997a,b; Rutledge et al., 1998; Parahoo, 2000; Griffiths et al., 2001; Marsh et al., 2001). 'Inadequate facilities for implementation' was cited in the top three barriers in five studies (Kajermo et al., 1998; Nolan et al., 1998; Retsas, 2000; Griffiths et al., 2001; Marsh et al., 2001). Finally, the item 'lack of awareness of research findings' was reported in the top three barriers in four studies (Funk et al., 1991a, 1995a; Carroll et al., 1997; Lewis et al., 1998; Retsas & Nolan, 1999). It is acknowledged that these studies comprised varying populations of nurses, employed differing sampling methods, used sample sizes ranging from 58 to 1368 respondents, and achieved response rates ranging from 27 to 76%.

In some studies, minor rewording of a limited number of items in the tool had been undertaken. Furthermore, some studies included only 28 of the 29 barrier items in the original BARRIERS Scale. Factor analysis, a statistical technique aimed at reducing the number of variables by grouping those that relate, to form relatively independent subgroups (Crichton, 2001; Tabachnick & Fidell, 2001), was undertaken in a limited number of these studies. In the UK, Dunn et al. (1997) tested the factor model proposed by Funk et al. (1991b) using confirmatory factor analysis, a complex statistical technique used to test a theory or model (Tabachnick & Fidell, 2001). Attempts to load each item onto a single identified factor were unsuccessful, and they concluded that the US model was inappropriate for their data. Closs & Bryar (2001) further explored the appropriateness of the BARRIERS Scale for use in the UK through exploratory factor analysis. The model identified included the following four factors: benefits of research for practice, quality of research, accessibility of research, and resources for implementation. Finally, Marsh et al. (2001) tested, using confirmatory factor analysis, a revised version of the BARRIERS Scale. The revision comprised minor changes in wording, such as substitution of the term 'administrator' with the term 'manager'. A factor structure that was not possible to interpret resulted, and they concluded that the model proposed by Funk et al. (1991b) was not supported and had limited subscale validity in the UK setting. In the light of these findings and those of Dunn et al. (1997), Marsh et al. (2001) suggested that the factor model arising from the original BARRIERS Scale was not sustained in the international context.

However, in Australia, Retsas & Nolan (1999) undertook an exploratory factor analysis resulting in a three-factor solution comprising: (i) nurses' perceptions about the usefulness of research in clinical practice, (ii) generating change to practice based on research, and (iii) accessibility of research. Again in Australia, a four-factor solution arose from another exploratory factor analysis undertaken by Retsas (2000).

The resulting factors were conceptualized as: accessibility of research findings, anticipated outcomes of using research, organizational support to use research, and support from others to use research. Given these findings in the Australian context, an exploratory factor analysis was employed in the present study to explore what model would arise from data generated using the BARRIERS Scale. The aim of the present study was to gain an understanding of perceived influences on nurses' utilization of research in a particular practice setting, and to explore what differences or commonalities exist between the findings of this research and those of studies which have been conducted during the past 10 years in various countries around the world. This study was undertaken as part of a larger study designed to explore the phenomenon of research utilization by nurses in the clinical setting. The relative importance of barrier and facilitator items, and the factor model arising from these data, will influence development of future stages of this larger study.

The questionnaire included the 29-item BARRIERS Scale in addition to an eight-item facilitator scale and a series of demographic questions. The respondents were asked to return completed questionnaires in the self-addressed envelope supplied, by either placing them in the internal mail or placing them in the 'return' box supplied in their ward or department. Return of completed questionnaires implied consent to participate, and all responses were anonymous.

Setting

The setting for this study was a 310-bed major teaching hospital offering specialist services in Melbourne, Australia.

Sample

Approximately 960 nurses work in the organization. All Registered Nurses working during the 4-week distribution time frame were invited to complete the questionnaire. This self-selecting, convenience sample therefore excluded nurses on leave at the time of the study.

The study

The research question addressed in this study was: What are nurses' perceptions of the barriers to, and facilitators of, research utilization in the practice setting?

Instrument

The questionnaire comprised three sections. The first section contained the 29 randomly ordered items from the Barriers to Research Utilization Scale (Funk et al., 1991b); respondents were asked to rate, on a four-point Likert-type scale, the extent to which they believed each item was a barrier to their use of research in practice. The options included 1 = 'to no extent', 2 = 'to a little extent', 4 = 'to a moderate extent' and 5 = 'to a large extent'. A 'no opinion' = 3 option was also given. The respondents were then asked to nominate and rate (1 = greatest barrier, 2 = second greatest barrier, and 3 = third greatest barrier) the items they considered to be the top three barriers.
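The scoring scheme above places 'no opinion' at the centre of the scale while the substantive options score 1, 2, 4 and 5. It can be sketched as a simple recoding step; this Python illustration and its sample responses are invented for the sketch, not taken from the paper:

```python
# Map raw questionnaire responses onto the analysis scale described in the paper:
# substantive options score 1, 2, 4, 5 and 'no opinion' is coded 3 (scale centre).
SCORES = {
    "to no extent": 1,
    "to a little extent": 2,
    "no opinion": 3,       # placed at the centre of the scale for analysis
    "to a moderate extent": 4,
    "to a large extent": 5,
}

def score_item(response: str) -> int:
    """Return the numeric score for one barrier-item response."""
    return SCORES[response.strip().lower()]

responses = ["To a large extent", "no opinion", "to a little extent"]
print([score_item(r) for r in responses])  # [5, 3, 2]
```

Coding 'no opinion' to the neutral midpoint (rather than treating it as missing) is the choice the authors report making on statistical advice.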

Further to this, the respondents were given the opportunity to list and rate, according to the above-mentioned Likert scale, any additional items they perceived to be barriers. The second section of the survey contained eight items (Table 4), which respondents were asked to rate according to the extent to which they considered them to be a facilitator of research utilization using the Likert scale described above. The respondents were also asked to nominate and rate, from 1 to 3, the items they considered to be the three greatest facilitators of research utilization.

Again, the respondents were given the opportunity to list and rate, according to the Likert scale described above, any perceived facilitators not listed in the survey.

Method

A survey design was chosen to elicit the opinions of nurses. This method was selected because the BARRIERS Scale, a validated questionnaire based on the work of Funk et al. (1991b) and designed to elicit nurses' views about barriers to, and facilitators of, research utilization in their practice, was found to have high reliability. Approval to use the tool was gained from the authors. Permission was also given to include questions crafted by the investigators to elicit nurses' opinions about facilitators of research utilization.

Approval to conduct the project was sought and granted by the hospital research ethics committee to ensure the rights and dignity of all respondents were protected. Nurses working during the 4-week survey distribution time frame (n = 761) were invited to complete the self-administered questionnaire. It was intended that every nurse receive a personally addressed envelope containing the questionnaire and a self-addressed return envelope. To facilitate this, the envelopes were hand delivered to a nominated nurse on each ward or department, who then took responsibility for distribution. It cannot be guaranteed, however, that this process in fact resulted in all nurses receiving the questionnaire.

Section 3 of the survey included a series of demographic questions.

Validity

Content validity, i.e. whether the questions in the tool accurately measure what is supposed to be measured (LoBiondo-Wood & Haber, 1998), of the instrument was supported by the literature on research utilization, the research utilization questionnaire developed by the Conduct and Utilization of Research in Nursing Project (Crane et al., 1977), and data gathered from nurses. Input was also gained from experts in the field of research utilization, nursing research and nursing practice, and from a psychometrician, to establish face validity, i.e. whether the tool appears to measure the concept intended (LoBiondo-Wood & Haber, 1998), and content validity from an extensive list of potential items. Those items for which face and content validity were established were retained. Following piloting of the instrument, two additional items were included and some minor rewording of other items resulted. The BARRIERS Scale has been found to have good reliability, with Cronbach's alpha coefficients of between 0.65 and 0.80 for the four factors, and item-total correlations from 0.30 to 0.53 (Funk et al., 1991b). Cronbach's alpha is a measure of internal consistency, which is related to the reliability of the instrument. A Cronbach's alpha of ≥0.7 is considered to be good. Internal consistency is the extent to which items in the scale measure the same concept (LoBiondo-Wood & Haber, 1998). Item-total correlations refer to the relationship between the question or item and the total scale score (LoBiondo-Wood & Haber, 1998).

Data analysis

Data analysis was performed using the Statistical Package for the Social Sciences (version 10.0; SPSS Inc., Chicago, IL, USA). Frequency and descriptive statistics were employed to describe the demographic characteristics of respondents. Analysis of these data indicated that a wide cross-section of nursing staff responded to the questionnaire. Factor analytic procedures were employed to reduce the 29 barrier items to factors. The 'no opinion' responses (coded to be in the centre of the scale) were included in the factor analytic procedure, on the basis of statistical advice.

Suitability of the data for undertaking factor analysis is determined by testing for sampling adequacy and sphericity. The Kaiser–Meyer–Olkin Measure of Sampling Adequacy, at 0.83, was in excess of the recommended value of 0.6 (Kaiser, 1974), indicating that the correlations or factor loadings, which reflect the strength of the relationship between barrier items, were high. The Bartlett test of sphericity, at 2118.3, was statistically significant (P < 0.001). On the basis of these results, factor analysis was considered appropriate.
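The two suitability checks reported here (KMO = 0.83; Bartlett test = 2118.3, P < 0.001) are both computed from the item correlation matrix. A minimal NumPy/SciPy sketch, run on synthetic data standing in for the survey responses (the study's raw data are not available):

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's test: does the correlation matrix differ from identity?"""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    dof = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, dof)

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(R)
    # partial correlations are derived from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    np.fill_diagonal(partial, 0)
    np.fill_diagonal(R, 0)
    return (R ** 2).sum() / ((R ** 2).sum() + (partial ** 2).sum())

# Synthetic stand-in: 300 "respondents", 29 "items" driven by four
# underlying factors, so both checks should clearly pass.
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 4))
items = latent @ rng.normal(size=(4, 29)) + rng.normal(size=(300, 29))

chi2, p = bartlett_sphericity(items)
print(f"Bartlett chi2 = {chi2:.1f}, p = {p:.3g}, KMO = {kmo(items):.2f}")
```

A KMO near 1 means partial correlations are small relative to raw correlations, i.e. the shared variance the factors will capture dominates.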

The factor analysis method employed consisted of principal component analysis (PCA), a method of reducing a number of variables (barrier items) to groupings to aid interpretation of the underlying relationships between the variables (Crichton, 2000) whilst capturing as much of the variance in the data as possible. PCA revealed eight components with an eigenvalue exceeding one, indicating that up to eight factors could be retained in the final factor solution. Inspection of the scree plot, a plot of the variance encompassed by the factors, failed to provide a clear indication of the number of factors to include.

Eight factors were considered too many to be meaningful, thus factor solutions from two to seven factors were explored. A solution comprising four factors was considered most meaningful. Examination of the factor loadings was then undertaken to determine which items belonged to each factor. Consistent with the procedure employed by Funk et al. (1991b), items were considered to have loaded if they had a factor loading of 0.4 or more. Varimax rotation, a statistical method employed to simplify and aid interpretation of factors, was then applied.
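The extraction procedure just described (PCA on the correlation matrix, Kaiser's eigenvalue-greater-than-one rule, a four-factor solution, varimax rotation and a 0.4 loading cutoff) can be sketched as follows. The data are simulated, not the study's, and the varimax routine is a standard SVD-based implementation, not code from the paper:

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    """Orthogonal (varimax) rotation of a loading matrix."""
    L = loadings.copy()
    p, k = L.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(n_iter):
        B = L @ R
        # gradient of the varimax criterion (gamma = 1)
        G = L.T @ (B ** 3 - B @ np.diag((B ** 2).sum(axis=0)) / p)
        U, s, Vt = np.linalg.svd(G)
        R = U @ Vt
        if d_old != 0 and s.sum() < d_old * (1 + tol):
            break
        d_old = s.sum()
    return L @ R

# Simulated stand-in: 300 "respondents", 29 "items", four true factors.
rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 4))
items = latent @ rng.normal(size=(4, 29)) + rng.normal(size=(300, 29))

R = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_keep = int((eigvals > 1).sum())              # Kaiser criterion
loadings = eigvecs[:, :n_keep] * np.sqrt(eigvals[:n_keep])
rotated = varimax(loadings[:, :4])             # rotate a four-factor solution
loaded = np.abs(rotated).max(axis=1) >= 0.4    # the paper's 0.4 cutoff
print(f"{n_keep} components with eigenvalue > 1; "
      f"{int(loaded.sum())} of 29 items load at 0.4 or more")
```

As in the study, the Kaiser rule alone can over-extract; fixing the rotated solution at four factors mirrors the authors' judgement call after inspecting the scree plot.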

Whilst factor analysis assists in reducing the number of variables to groupings and aids in interpretation of the underlying structure of the data, it does not identify the relative importance of individual items. Thus, while one factor may account for the largest amount of variance in the factor solution, it does not mean that the items within that factor are the greatest barriers to research utilization. In order to determine the relative significance of each barrier item, the number of respondents who reported them as a moderate or great barrier was calculated and items were ranked accordingly.
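The ranking procedure is straightforward: count the respondents rating an item a 4 or 5, convert to a percentage, then sort. A sketch with invented item names and random scores (not the survey data):

```python
import numpy as np

# rows = respondents, columns = items, scored 1-5 with 3 = 'no opinion';
# the item labels and scores below are illustrative only
rng = np.random.default_rng(3)
scores = rng.integers(1, 6, size=(317, 5))
items = ["no time to read research", "lack of authority",
         "unaware of the research", "statistics not understandable",
         "inadequate facilities"]

# percentage rating each item a moderate (4) or great (5) barrier
pct = (scores >= 4).mean(axis=0) * 100
for name, p in sorted(zip(items, pct), key=lambda t: -t[1]):
    print(f"{name}: {p:.1f}%")
```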

Additional barriers recorded by participants were grouped thematically. Similarly, to determine the relative significance of each facilitator item, the number of respondents who reported them as a moderate or great facilitator was calculated and items were ranked accordingly. Additional facilitators recorded by participants were grouped thematically.

Results

Demographics

A total of 317 nurses returned the questionnaires, representing a 45% response rate, assuming that all nurses did, in fact, receive a personally addressed envelope. The age range of respondents was 43 years (minimum = 21 years, maximum = 64 years), while the range in years since registration was 42 years. The demographic characteristics of the nurses (Table 1) were consistent with those of the State of Victoria's nursing workforce (The Australian Institute of Health and Welfare, 1999).

Table 1 Nurse demographics (n = 317)

Variable | N (%) or Mean (SD)
Gender: male | 24 (7.6)
Gender: female | 291 (91.8)
Gender: missing | 2 (0.6)
Age (years) | 33.8 (9.73)
Experience as a Registered Nurse (years) | 12.6 (9.95)
Clinical experience (years) | 11.35 (8.8)
Years since most recent qualification | 4.28 (6.52)
Highest qualification: Division 2 certificate for registration | 14 (4.4)
Highest qualification: Division 1 hospital certificate for registration | 23 (7.3)
Highest qualification: tertiary diploma/degree for registration | 104 (32.8)
Highest qualification: specialist nursing certificate | 26 (8.2)
Highest qualification: graduate diploma | 34 (10.7)
Highest qualification: Masters by coursework | 9 (2.8)
Highest qualification: Masters by research | 1 (0.3)
Highest qualification: others (including education and management qualifications) | 87 (27.4)
Highest qualification: missing | 19 (6.0)
Principal job function: clinical | 252 (79.5)
Principal job function: administrative | 28 (8.8)
Principal job function: research | 6 (1.9)
Principal job function: education | 10 (3.2)
Principal job function: others | 15 (4.7)
Principal job function: missing | 6 (1.9)
Research experience: yes | 207 (65.3)
Research experience: no | 105 (33.1)
Research experience: missing | 5 (1.6)

Factor analysis

A four-factor solution was selected as the most appropriate model arising from PCA of the 29 barrier items. This accounted for 39.% of the total variance in responses to all barrier items. The factor groupings, including the loading for each barrier item and the titles allocated to each factor, are included in Table 2. According to the correlation coefficient or factor loading measure of ≥0.4, two items, 'research reports/articles are not published fast enough' and 'the research has not been replicated', failed to load on any of the four factors.

Factor 1, comprising eight items with loadings of 0.73 to 0.43, includes items relating to characteristics of the organization that influence research-based change. Eight items loaded onto factor 2 with loadings of 0.66 to 0.40. These items are associated with qualities of research and potential outcomes associated with the implementation of research findings. Factor 3, with seven items loading 0.60 to 0.41, relates to the nurse's research skills, beliefs and role limitations. Factor 4 refers to communication and accessibility of research findings, onto which five items loaded 0.67 to 0.42.

The four factor groupings comprising setting, nurse, research and presentation, generated in the US study 10 years earlier (Funk et al., 1991b), were similar to the groupings that arose from factor analysis in the present study (Table 2). Cronbach's alphas were calculated for each factor generated. For factors 1–3 the alpha coefficients were 0.75, 0.74 and 0.70, respectively, demonstrating good reliability. The alpha coefficient for factor 4 was lower, at 0.54. The total scale alpha was 0.86, which indicates that the scale can be considered reliable with this sample. Item-total correlations ranged from 0.1 to 0.60. Although a low correlation between some items and the total score was evident, deleting any of these items would have resulted in a reduction in the reliability of the scale.

Relative importance of barrier and facilitator items

The percentages of items perceived by nurses as great or moderate barriers are summarized in Table 3. The respondents were also given the opportunity to list and rate any additional perceived barriers not included in the questionnaire. About 27% (85) of respondents documented a total of 174 barriers. However, analysis revealed that only 11% (36) of respondents actually identified additional barriers. The remainder had reiterated or reworded barrier items already included in the tool. The additional barrier items listed by respondents were grouped into themes, which included funding, organizational commitment, research training, implementation strategy and professional responsibility. The percentages of items perceived by nurses as great or moderate facilitators are summarized in Table 4. The respondents were also given the opportunity to list and rate additional perceived facilitators. Eighteen per cent (57) of respondents took the opportunity to record a total of 90 facilitators. Of these, 7.6% (24) actually identified additional facilitators, whereas the remainder had rephrased or repeated items already included in the tool.
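The reliability statistics reported above (per-factor alphas, a total-scale alpha of 0.86, and item-total correlations) can be reproduced with a few lines of code; the scores below are made up to illustrate the calculation, not the study data:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def item_total_correlations(items):
    """Correlation of each item with the sum of the remaining items."""
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

# Made-up scores: 200 "respondents", 8 items driven by one common trait,
# so the scale should come out internally consistent.
rng = np.random.default_rng(2)
trait = rng.normal(size=(200, 1))
scores = trait + rng.normal(size=(200, 8))

print(f"alpha = {cronbach_alpha(scores):.2f}")
print(item_total_correlations(scores).round(2))
```

Deleting a low-correlating item only raises alpha if the item contributes more noise than shared variance, which is why the authors retained all items here.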
The themes identified for the additional facilitators, consistent with those identified for the additional barriers, were funding, organizational commitment, active participation in the research process (experience, and strategy to ensure project completion), implementation strategies, and professional attitude.

Table 2 BARRIERS Scale factors (the factor grouping for each item in the US study of Funk et al., 1991b, is shown in parentheses)

Factor 1: Organizational influences on research-based change
- Physician will not cooperate with implementation (Setting)
- Administration will not allow implementation (Setting)
- The nurse does not feel she/he has enough authority to change patient care procedures (Setting)
- The facilities are inadequate for implementation (Setting)
- Other staff are not supportive of implementation (Setting)
- The nurse feels results are not generalizable to own setting (Setting)
- The nurse is unwilling to change/try new ideas (Nurse)

Factor 2: Qualities of the research and potential outcomes of implementation
- The research has methodological inadequacies (Research)
- The literature reports conflicting results (Research)
- The conclusions drawn from the research are not justified (Research)
- The research is not relevant to the nurse's practice (Presentation)
- The nurse is uncertain whether to believe the results of the research (Research)
- The research is not reported clearly and readably (Presentation)
- Statistical analyses are not understandable (Presentation)
- The nurse feels the benefits of changing practice will be minimal (Nurse)

Factor 3: Nurses' research skills, beliefs and role limitations
- The nurse sees little benefit for self (Nurse)
- The nurse does not feel capable of evaluating the quality of the research (Nurse)
- There is not a documented need to change practice (Nurse)
- The nurse does not see the value of research for practice (Nurse)
- The amount of research information is overwhelming (*)
- The nurse is isolated from knowledgeable colleagues with whom to discuss the research (Nurse)
- There is insufficient time on the job to implement new ideas (Setting)

Factor 4: Communication and accessibility of research findings
- Research reports/articles are not readily available (Presentation)
- Implications for practice are not made clear (Presentation)
- The nurse is unaware of the research (Nurse)
- The relevant literature is not compiled in one place (Presentation)
- The nurse does not have time to read research (Setting)

Two items, 'research reports/articles are not published fast enough' and 'the research has not been replicated', did not load at the 0.4 level in this analysis. *The item 'the amount of research information is overwhelming' failed to load on any factor in the Funk et al. model. Individual factor loadings and communalities are not reproduced here.

Discussion The present study generated a four-factor solution with similarities to that produced in the US by Funk et al. (1991b) and in the UK by Closs & Bryar (2001). The ? rst factor comprises characteristics of the organization and re? ects health professional and other resource support for change 310 associated with the implementation of research ? ndings. More broadly, the theme ‘organizational commitment’ identi? ed following analysis of the additional perceived barriers listed by respondents, appears to be associated with this factor.

Organizational commitment, many respondents felt, would facilitate mobilization of resources to promote change. Factor 2 relates to qualities of research and potential outcomes associated with the implementation of research findings. This factor reflects the nurse's reservations about reliability and validity of research findings and conclusions,

© 2004 Blackwell Publishing Ltd, Journal of Clinical Nursing, 13, 304–315

Table 3 BARRIERS Scale items in rank order

Barrier item | Reporting item as moderate or great barrier (%) | Item mean score (SD) | Responding 'no opinion' or non-response (%)
The nurse does not have time to read research | 78.3 | 4.06 (1.21) | 0.9
There is insufficient time on the job to implement new ideas | 73.8 | 3.9 (1.3) | 1.6
The nurse is unaware of the research | 66.2 | 3.64 (1.4) | 1.6
The nurse does not feel she/he has enough authority to change patient care procedures | 64.7 | 3.51 (1.39) | 0.9
Statistical analyses are not understandable | 64.1 | 3.56 (1.32) | 3.8
The relevant literature is not compiled in one place | 58.7 | 3.51 (1.26) | 13
Physicians will not cooperate with the implementation | 56.1 | 3.41 (1.33) | 7.6
The nurse does not feel capable of evaluating the quality of the research | 55.8 | 3.3 (1.39) | 3.5
The facilities are inadequate for implementation | 52 | 3.23 (1.3) | 8.8
Other staff are not supportive of implementation | 52 | 3.16 (1.29) | 6.3
Research reports/articles are not readily available | 50.8 | 3.19 (1.35) | 6.3
The nurse feels results are not generalizable to own setting | 50.8 | 3.09 (1.26) | 3.5
The amount of research information is overwhelming | 45.7 | 3.07 (1.35) | 6.9
Implications for practice are not made clear | 45.5 | 3.0 (1.22) | 5
The research is not reported clearly and readably | 43.3 | 3.01 (1.25) | 8.2
The research has not been replicated | 41.3 | 3.16 (1.14) | 26.1
The nurse is isolated from knowledgeable colleagues with whom to discuss the research | 41 | 2.76 (1.49) | 3.8
Administration will not allow implementation | 35 | 2.88 (1.18) | 19.6
The research is not relevant to the nurse's practice | 34.4 | 2.67 (1.28) | 4.4
The literature reports conflicting results | 34 | 2.87 (1.11) | 18.9
The nurse feels the benefits of changing practice will be minimal | 31.9 | 2.52 (1.3) | 3.5
The nurse is uncertain whether to believe the results of the research | 30.9 | 2.58 (1.29) | 4.7
Research reports/articles are not published fast enough | 30.6 | 2.81 (1.21) | 25.2
The nurse is unwilling to change/try new ideas | 29.4 | 2.34 (1.34) | 2.2
The research has methodological inadequacies | 25.5 | 2.85 (1.0) | 32.5
The nurse sees little benefit for self | 23.3 | 2.25 (1.26) | 3.5
There is not a documented need to change practice | 22.1 | 2.27 (1.24) | 8.5
The nurse does not see the value of research for practice | 17 | 1.9 (1.21) | 1.6
The conclusions drawn from the research are not justified | 13.8 | 2. (1.02) | 21

Table 4 Facilitator items in rank order

Facilitator item | Reporting item as moderate or great facilitator (%) | Item mean score (SD) | Number (%) responding 'no opinion' or non-response
Increasing the time available for reviewing and implementing research findings | 89.6 | 4.52 (0.93) | 8 (2.5)
Conducting more clinically focused and relevant research | 89.5 | 4.39 (0.94) | 6 (1.8)
Providing colleague support network/mechanisms | 84.8 | 4.21 (1.02) | 9 (2.8)
Advanced education to increase your research knowledge base | 82.3 | 4.11 (1.13) | 6 (1.8)
Enhancing managerial support and encouragement of research implementation | 82.0 | 4.15 (1.08) | 10 (3.2)
Improving availability and accessibility of research reports | 81.4 | 4.12 (1.11) | 5 (1.5)
Improving the understandability of research reports | 81.3 | 4.16 (1.1) | 8 (2.5)
Employing nurses with research skills to serve as role models | 78.2 | 4.04 (1.22) | 9 (2.9)
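The per-item percentages, means (SD) and non-response figures in Tables 3 and 4 are standard Likert-item summaries. A minimal sketch of how such columns are typically computed, using invented responses (the five-point coding with 4 = 'great' and 5 = 'moderate or great' thresholds is an assumption for illustration; the paper does not publish its raw data or coding):

```python
from statistics import mean, stdev

# Hypothetical ratings for one item: 1-5 Likert codes, None = 'no opinion'/non-response.
responses = [5, 4, 4, 3, 5, 2, None, 4, 1, 5, 4, None, 3, 5, 2]

rated = [r for r in responses if r is not None]
# Assumed convention: codes 4-5 count as 'moderate or great', denominator = all respondents.
pct_moderate_or_great = 100 * sum(r >= 4 for r in rated) / len(responses)
item_mean = mean(rated)              # mean over those who rated the item
item_sd = stdev(rated)               # sample standard deviation
pct_no_opinion = 100 * responses.count(None) / len(responses)

print(f"moderate/great: {pct_moderate_or_great:.1f}%")        # 53.3%
print(f"mean (SD): {item_mean:.2f} ({item_sd:.2f})")          # 3.62 (1.33)
print(f"no opinion/non-response: {pct_no_opinion:.1f}%")      # 13.3%
```

Whether non-respondents belong in the denominator of the 'moderate or great' column is a reporting choice the paper does not spell out; the sketch includes them, which is one defensible reading of the tables.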

in addition to benefits of use of findings in practice. Factor 3 focuses on characteristics of the nurse. In particular, this factor is associated with the nurse's beliefs about the value of research and their research skills, in addition to the limitations of their role. The fourth factor is concerned with characteristics of communication. The focus of this factor centres on access to research findings and understanding of the implications of findings. The issues encompassed within this factor reflect organizational barriers to access, and research presentation barriers.

These factors are congruent with the concepts characterized in Rogers' (1995) model of 'diffusion of innovations', including characteristics of the adopter, organization, innovation and communication, on which the BARRIERS Scale was developed. Two barrier items, 'research reports/articles are not published fast enough' and 'the research has not been replicated', failed to load sufficiently onto a factor and were subsequently discarded. Exclusion of these items from the model reflects their minimal significance in relation to the underlying dimensions of the factors. That these items were ranked 23 and 16, respectively, is not surprising because they become less relevant when there is a perceived lack of time to read research and implement change, as reflected in the top two nominated barriers to research utilization. It is also important to note that over one quarter of respondents selected the 'no opinion' option or failed to respond to both of these items, which further suggests their lack of importance to respondents. The majority of respondents in this study rated approximately 40% of the barriers items as moderate or great barriers. This is compared with the majority of nurse clinicians in the US (Funk et al., 1991a) and nurses in the UK (Dunn et al., 1997), who rated about 65% of the barrier items as moderate or great barriers. Overall, this group of Australian nurses perceived there to be fewer barriers to research utilization than their colleagues in the UK or US, with a mean score of 43.7% of respondents rating all the barriers as moderate or great. In the UK (Walsh, 1997a) and the US (Funk et al., 1991a) mean scores of 59.8 and 55.7%, respectively, reflect the proportion of respondents who rated all barriers as moderate or great. Possible influences such as time, population and nursing education programmes should be acknowledged when considering these comparisons.

Content analysis of the data comprising additional perceived barriers elicited five new themes respondents associated with barriers to research utilization. Revision of the instrument to reflect the themes identified, and changes that have occurred over the past 10 years, may be warranted to achieve a more valid scale for the setting in which it was used in this study. The addition of items consistent with changes in the availability of technological resources, information availability and use, and education may enhance the content validity of the scale. The ranking of perceived barriers in practice resulting from this study showed considerable consistency with rankings reported in other studies, as previously discussed. The top three barriers reported in 12 other studies fell within the top 10 barriers identified in this study. Furthermore, two of the top three barriers in an additional two studies fell within the top 10 barriers identified in the present study. The barrier item 'there is insufficient time on the job to implement new ideas' was reported within the top three barriers in 13 studies, including this and another Australian study (Retsas, 2000). When Spearman's rank order correlation coefficients were generated to compare the rank ordering of perceived barriers, a strong positive correlation between this and several other studies was evident (Table 5). Whilst acknowledging differences in nursing populations, sample size, sampling methods, response rates, and minor variations in item wording and number, this suggests a large degree of consistency regarding nurses' perceptions of the relative importance of the barrier items. Marsh et al. (2001), however, caution against international comparisons with the original US data because changes in nursing education and roles, technology, funding and collaboration with other disciplines since then may invalidate such comparisons. Nonetheless, despite these changes, the findings of the present study have consistencies with not only the US data of 1991 but also more recent studies in the US, UK, Sweden, Northern Ireland and Australia (Table 5). Thus, notwithstanding the increasing momentum of the evidence-based practice movement in recent years, the pursuit of professional status by the nursing profession, the move of nursing education to the tertiary sector, and increased access to systematic reviews and research databases, the research–practice gap persists.

Table 5 Barrier rank order correlations

Study | Location | r | P | Coefficient of determination (%)
Funk et al. (1991a) | USA | 0.866 | 0.000 | 75
Funk et al. (1995a) | USA | 0.779 | 0.000 | 61
Dunn et al. (1997) | UK | 0.835 | 0.000 | 70
Rutledge et al. (1998) | USA | 0.816 | 0.000 | 66
Lewis et al. (1998) | USA | 0.879 | 0.000 | 77
Kajermo et al. (1998) | Sweden | 0.719 | 0.000 | 52
Retsas & Nolan (1999) | Australia | 0.884 | 0.000 | 78
Parahoo (2000) | Northern Ireland | 0.837 | 0.000 | 70
Retsas (2000) | Australia | 0.801 | 0.000 | 64
Closs et al. (2000) | UK | 0.762 | 0.000 | 58
Parahoo & McCaughan (2001) | Northern Ireland | 0.799 | 0.000 | 64
Griffiths et al. (2001) | UK | 0.912 | 0.000 | 83
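The Table 5 statistics can be reproduced for any pair of studies from their barrier rank orderings: Spearman's rho for tie-free rankings, and the coefficient of determination as 100·r² (e.g. 0.866² ≈ 75%). A small sketch with invented rankings (not taken from any of the cited studies):

```python
def spearman_rho(ranks_a, ranks_b):
    """Spearman's rho for two complete, tie-free rankings of the same items."""
    n = len(ranks_a)
    d_sq = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d_sq / (n * (n * n - 1))

# Hypothetical ranks assigned to the same 8 barrier items by two studies.
study_x = [1, 2, 3, 4, 5, 6, 7, 8]
study_y = [2, 1, 3, 5, 4, 6, 8, 7]

rho = spearman_rho(study_x, study_y)
print(f"rho = {rho:.3f}, coefficient of determination = {100 * rho**2:.0f}%")
# prints: rho = 0.929, coefficient of determination = 86%
```

The closed-form 1 − 6Σd²/n(n²−1) holds only without tied ranks; real comparisons across studies with tied items would need the general rank-correlation formula (e.g. scipy.stats.spearmanr).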

In the light of the plethora of research and theoretical literature on the research–practice gap and issues surrounding research utilization, it is of concern that nurses' perceptions of the barriers to research utilization appear to remain consistent. In particular, issues surrounding support for implementation of research findings, authority to change practice, time constraints and ability critically to appraise research continue to be perceived by nurses as the greatest barriers to research utilization. This raises important questions. Firstly, do such perceptions reflect the reality of contemporary nursing? Or rather, do they represent unchallenged, traditionally held and firmly entrenched beliefs, which are founded on an understanding of nursing in a socio-historic context that is no longer relevant? If such perceptions do, in fact, reflect the reality of current day nursing practice, despite the changes and progress that have been made in health care and nursing over the last decade, it behoves us, as a profession, to address the issues related to time, authority, support and skills in critical appraisal with conviction and a sense of urgency. Contextual issues including the socio-political environment, organizational culture and interprofessional relations need to be taken into serious consideration when exploring and formulating potential strategies to overcome these barriers. The hospital in which this study was conducted has since undertaken to explore and develop strategies to address and overcome barriers to, and reinforce and strengthen facilitators of, research utilization highlighted in the findings.

Limitations

Reporting bias associated with the self-report method raises questions about the extent to which the responses accurately represent nurses' perceptions of the barriers to research utilization. The low response rate achieved in this study, although consistent with response rates reported in several other studies using the BARRIERS Scale, may reflect a response bias. That is, nurses with a positive attitude to research may have been more likely to complete the questionnaire. Internal consistency, the extent to which items in the scale measure the same concept (LoBiondo-Wood & Haber, 1998), of the tool was reasonable, although not as high as that reported by Funk et al. (1991b). For seven items, more than 10% of the respondents nominated 'no opinion' or failed to respond. Furthermore, this study was conducted in one organization; the findings are therefore context specific, which makes it difficult to generalize to other settings. However, there is consistency over time and between countries in regard to nurses' perceptions of the barriers to research utilization.

Conclusion

In order to gain an understanding of perceived influences on nurses' utilization of research in a particular practice setting, nurses were surveyed to elicit their opinions regarding barriers to, and facilitators of, research utilization. Many of the perceived barriers to research utilization reported by this group of Australian nurses are consistent with reported perceptions of nurses in the US, UK and Northern Ireland during the past decade. Time was the most important barrier perceived by nurses in this study, which is reflected by responses to the items 'the nurse does not have time to read research' and 'there is insufficient time on the job to implement new ideas', resulting in them being ranked as the top two barriers to research utilization. Consistent with this finding was the ranking of the facilitator item 'increasing the time available for reviewing and implementing research findings' as the most important facilitator of research utilization. The employment of qualitative research methods, such as observation and interview, will contribute further to our knowledge about barriers to, and facilitators of, research utilization by nurses by allowing deeper exploration of experiences, perceptions and issues faced by nurses in the utilization of research in their practice.

Fundamental questions about whether nurses' perceptions actually reflect the reality of the current context of nursing need to be further investigated. Future research should also examine issues surrounding the use of time by nurses. Questions exploring how much additional time nurses require in order to read the relevant literature, and how nurses can be given more time to implement new ideas, need to be addressed. Issues related to nurses' perception of their authority to change patient care procedures, the support and cooperation afforded by doctors and others, the facilities and availability of resources, and their skills in critical appraisal, also require further exploration. Investigation of the information-seeking behaviour of nurses, the means by which they gain and synthesize new research knowledge and the way in which they apply that knowledge to their decision making, will further contribute to our understanding of the research–practice gap phenomenon.
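The internal consistency discussed under the limitations is conventionally estimated with Cronbach's alpha, alpha = k/(k−1) · (1 − Σ item variances / variance of totals). A minimal sketch with invented scores (the paper reports only that the statistic was 'reasonable'; no raw data are available):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list per scale item, each holding respondents' scores."""
    k = len(item_scores)
    totals = [sum(vals) for vals in zip(*item_scores)]   # per-respondent totals
    item_var = sum(pvariance(vals) for vals in item_scores)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Three hypothetical items scored by five respondents.
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
]
print(f"alpha = {cronbach_alpha(items):.2f}")   # alpha = 0.87
```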

Measurement of the actual extent of research utilization by nurses in the practice setting presents a major challenge for researchers in this field.

Acknowledgements

The authors thank Sandra Funk for her permission to use the BARRIERS Scale for the purpose of this study. We wish to acknowledge and thank the nurses who completed the questionnaire. The authors also wish to acknowledge the statistical assistance provided by Ms Anne Solterbeck, Statistical Consulting Centre, Department of Mathematics and Statistics, The University of Melbourne.

Contributions

Study design: LJ, AMH; data analysis: AMH; manuscript preparation: AMH, LJ; literature review: AMH.

References

Berggren A. (1996) Swedish midwives' awareness of, attitudes to and use of selected research findings. Journal of Advanced Nursing 23, 462–470.
Carroll D.L., Greenwood R., Lynch K., Sullivan J.K., Ready C.H. & Fitzmaurice J.B. (1997) Barriers and facilitators to the utilization of nursing research. Clinical Nurse Specialist 11, 207–212.
Closs S.J. & Bryar R.M. (2001) The barriers scale: does it 'fit' the current NHS research culture? NT Research 6, 853–865.
Closs S.J. & Cheater F.M. (1994) Utilization of nursing research: culture, interest and support. Journal of Advanced Nursing 19, 762–773.
Closs S.J., Baum G., Bryar R.M., Griffiths J. & Knight S. (2000) Barriers to research implementation in two Yorkshire hospitals. Clinical Effectiveness in Nursing 4, 3–10.
Crane J., Pelz D.C. & Horsley J.A. (1977) Conduct and Utilization of Research in Nursing Project. School of Nursing, University of Michigan, Ann Arbor, MI.
Crichton N. (2000) Information point: principal component analysis. Journal of Clinical Nursing 9, 815.
Crichton N. (2001) Information point: factor analysis. Journal of Clinical Nursing 10, 550–562.
Dunn V., Crichton N., Roe B., Seers K. & Williams K. (1997) Using research for practice: a UK experience of the barriers scale. Journal of Advanced Nursing 26, 1203–1210.
Estabrooks C.A. (1999) Will evidence-based nursing practice make practice perfect? Canadian Journal of Nursing Research 30, 273–294.
Evidence-Based Medicine Working Group (1992) A new approach to teaching the practice of medicine. Journal of the American Medical Association 268, 2420–2425.
Funk S.G., Champagne M.T., Wiese R.A. & Tornquist E.M. (1991a) Barriers to using research findings in practice: the clinician's perspective. Applied Nursing Research 4, 90–95.
Funk S.G., Champagne M.T., Wiese R.A. & Tornquist E.M. (1991b) Barriers: the barriers to research utilization scale. Applied Nursing Research 4, 39–45.
Funk S.G., Champagne M.T., Tornquist E.M. & Wiese R. (1995a) Administrator's views on barriers to research utilization. Applied Nursing Research 8, 44–49.
Funk S.G., Tornquist E.M. & Champagne M.T. (1995b) Barriers and facilitators of research utilization. Nursing Clinics of North America 30, 395–407.
Gennaro S., Hodnett E. & Kearney M. (2001) Making evidence-based practice a reality in your institution: evaluating the evidence and using the evidence to change clinical practice. MCN, the American Journal of Maternal/Child Nursing 26, 236–244.
Gould D. (1986) Pressure sore prevention and treatment: an example of nurses' failure to implement research findings. Journal of Advanced Nursing 11, 389–394.
Griffiths J.M., Bryar R.M., Closs S.J., Cooke J., Hostick T., Kelly S. & Marshall K. (2001) Barriers to research implementation by community nurses. British Journal of Community Nursing 6, 501–510.
Hicks C. (1994) Bridging the gap between research and practice: an assessment of the value of a study day in developing research reading skills in midwives. Midwifery 10, 18–25.
Hicks C. (1996) A study of nurses' attitudes towards research: a factor analytic approach. Journal of Advanced Nursing 23, 373–379.
Hunt J. (1981) Indicators for nursing practice: the use of research findings. Journal of Advanced Nursing 6, 189–194.
Hunt J. (1996) Barriers to research utilization. Journal of Advanced Nursing 23, 423–425.
Hunt J. (1997) Towards evidence based practice. Nursing Management 4, 14–17.
Kaiser H. (1974) An index of factorial simplicity. Psychometrika 39, 31–36.
Kajermo K.N., Nordstrom G., Krusebrant A. & Bjovell H. (1998) Barriers to and facilitators of research utilization, as perceived by a group of registered nurses in Sweden. Journal of Advanced Nursing 27, 798–807.
Ketefian S. (1975) Application of selected nursing research findings into nursing practice: a pilot study. Nursing Research 24, 89–92.
Kirchhoff K.T. (1982) A diffusion survey of coronary precautions. Nursing Research 31, 196–201.
Lacey A. (1994) Research utilization in nursing practice: a pilot study. Journal of Advanced Nursing 19, 987–997.
Lewis S.L., Prowant B.F., Cooper C.L. & Bonner P.N. (1998) Nephrology nurses' perceptions of barriers and facilitators to using research in practice. ANNA Journal 25, 397–405.
LoBiondo-Wood G. & Haber J. (1998) Nursing Research: Methods, Critical Appraisal and Utilization. Mosby, St Louis, MO.
MacGuire J.M. (1990) Putting nursing research findings into practice: research utilization as an aspect of the management for change. Journal of Advanced Nursing 15, 614–620.
Marsh G.W., Nolan M. & Hopkins S. (2001) Testing the revised barriers to research utilization for use in the UK. Clinical Effectiveness in Nursing 5, 66–72.
Nolan M., Morgan L., Curran M., Clayton J., Gerrish K. & Parker K. (1998) Evidence-based care: can we overcome the barriers? British Journal of Nursing 7, 1273–1278.
Parahoo K. (2000) Barriers to, and facilitators of, research utilization among nurses in Northern Ireland. Journal of Advanced Nursing 31, 89–98.
Parahoo K. & McCaughan E.M. (2001) Research utilization among medical and surgical nurses: a comparison of their self reports and perceptions of barriers and facilitators. Journal of Nursing Management 9, 21–30.
Retsas A. (2000) Barriers to using research evidence in nursing practice. Journal of Advanced Nursing 31, 599–606.
Retsas A. & Nolan M. (1999) Barriers to nurses' use of research: an Australian hospital study. International Journal of Nursing Studies 36, 335–343.
Rizzuto C., Bostrum J., Suter W.N. & Chenitz W.C. (1994) Predictors of nurses' involvement in research activities. Western Journal of Nursing Research 16, 193–204.
Rogers E.M. (1995) Diffusion of Innovations. The Free Press, New York.
Rutledge D.N., Ropka M., Greene P.E., Nail L. & Mooney K.H. (1998) Barriers to research utilization for oncology staff nurses and nurse managers/clinical nurse specialists. Oncology Nursing Forum 25, 497–506.
Sackett D.L., Rosenberg W.M.C., Gray J.A.M., Haynes R.B. & Richardson W.S. (1996) Evidence based medicine: what it is and what it isn't. British Medical Journal 312, 71–72.
Stetler C.B. (1994a) Problems and issues of research utilization. In Nursing Issues in the 1990's (Strickland O.L. & Fishman D.L. eds). Delmar, New York, pp. 459–470.
Stetler C.B. (1994b) Refinement of the Stetler/Marram model for application of research findings to practice. Nursing Outlook 42, 15–25.
Tabachnick B.G. & Fidell L.S. (2001) Using Multivariate Statistics. Allyn & Bacon, Needham Heights, MA.
The Australian Institute of Health and Welfare (1999) National Health Labour Force Series. Number 20 – Nursing Labour Force 1999. The Australian Institute of Health and Welfare, Canberra.
Walsh M. (1997a) How nurses perceive barriers to research implementation. Nursing Standard 11, 34–39.
Walsh M. (1997b) Perceptions of barriers to implementing research. Nursing Standard 11, 34–37.
Walsh M. & Ford P. (1989) Rituals in nursing: 'we always do it this way'. Nursing Times 85, 26–35.
Winter J.C. (1990) Brief. Relationship between sources of knowledge and use of research findings. The Journal of Continuing Education in Nursing 21, 138–140.


Review of Learning in the Panic Zone: Strategies for Managing Learner Anxiety

Introduction

It is generally agreed that research can be classified from different perspectives, for example into empirical and philosophical research according to whether or not data are collected (Allison, 2012). The same applies to “social research”, which features “focusing on people in a social setting” (Robson, 2011, p. 5) and aims at achieving the research purposes of “action, change and emancipation” (Robson, 2011, p. 39).

In terms of research paradigms, “social research” can be divided into “quantitative research” and “qualitative research”, the former usually focusing on collecting numerical data and the latter on collecting data in the form of words (Robson, 2011, p. 5). By being aware of different theoretical approaches, researchers become reflexive, creative, and capable of reinvention and evolution (Robson, 2011, p. 41). Also according to Robson, the kind of research that “refers to applied research projects which are typically small in scale and modest in scope” is termed “real world research” (Robson, 2011, p. 3).

It usually solves “problems and issues of direct relevance to people’s lives” (Robson, 2011, p. 4). The research under review, which applies strategies to real programs (Palethorpe & Wilson, 2011, p. 420), appears to be this kind of research. In this assignment, I am going to evaluate the article under review in terms of its strengths and weaknesses, and to relate the analysis to broader issues of research.

Strengths

Firstly, to some extent, this article is formally logical and well organized, using subheadings and questions such as “How do trainers support learners who undertake challenging tasks?” (Palethorpe & Wilson, 2011, p. 427). Recognizing the “GAP” (Shon, 2012, p. 3) in the literature, namely that little attention has been paid to the positive effect of stress in real cases (Palethorpe & Wilson, 2011, p. 420), the researchers formed their research questions, presented their “RAT” (Shon, 2012, p. 3), and then arrived at the research design and research method: a “multi-strategy design” (Robson, 2011, p. 6) and “triangulation” (Cohen, 2007, p. 141) respectively. Seen from the perspective of the research design, it is closely related to previous literature and theory and tries to answer the research questions by adopting certain research methods.

Finally, the conclusion indicates that the theoretical strategies are in accordance with the comfort-stretch-panic model in previous literature and recommends further studies (Palethorpe & Wilson, 2011, p. 435). Secondly, as social research, it is of great value that it shows a “scientific attitude”: working “systematically, skeptically and ethically” (Robson, 2011, p. 15). Specifically, by “systematically” I mean that this research is well prepared and arranged by two experienced trainers and consultants, with “over six years’ experience of providing consultancy in training” (Palethorpe & Wilson, 2011, p. 38) and “more than 30 years’ experience in education and training” (Palethorpe & Wilson, 2011, p. 420) respectively. So they both have a clear understanding of what they are doing in the research, how, and why. They made a detailed exposition of the literature, including “theoretical solutions to debilitating learner anxiety” (Palethorpe & Wilson, 2011, p. 421) and “practical measures that a trainer can take to prepare learners for challenging tasks” (Palethorpe & Wilson, 2011, p. 427), and designed the questionnaires in the research utilizing the strategies in the literature.

Such a coherent process of research design satisfies the first aspect of the “scientific attitude”. By “skeptically”, I mean that the researchers have recognized the study’s limitations, namely the use of a small sample of 30 potential participants and the absence of trainees’ feedback, and have thus recommended future work considering the “individual personal differences and how these impact differential responses to stressful situations” (Palethorpe & Wilson, 2011, p. 435), thereby “subjecting ideas to possible disconfirmation” (Robson, 2011, p. 5). And finally, “ethically” is represented in the questionnaires, which “were sent only to those who indicated availability to help with the research” (Palethorpe & Wilson, 2011, p. 428). The third part of the advantages focuses on the research method. Combining strategies of survey and interview, this research mainly employs qualitative research methods. However, it can also be called “triangulation” because of the close connection among literature, survey and interviews.

According to Cohen, “triangulation” may be defined as “the use of two or more methods of data collection in the study of some aspect of human behavior” (Cohen, 2007, p. 141). It is often used to mean “bringing different kinds of evidence to bear on a problem” (Esterberg, 2002, p. 176). In the article under review, by “triangulation” the researchers mean triangulating literature, survey and interview. According to different kinds of literature, there are many types of triangulation, each with its own characteristics, of which “theoretical triangulation” and “methodology triangulation” (Cohen, 2007, p. 142) are reflected in this research. According to Cohen, the former “draws upon alternative or competing theories in preference to utilizing one viewpoint only” and the latter “uses either the same method on different occasions or different methods on the same object of study” (Cohen, 2007, p. 142). Sometimes different theories, or results from conducting different methods, lead to conflicting conclusions; this does not mean the research is wrong, but may indicate the necessity of further study and research in a wider field.

So by adopting different types of triangulation, researchers feel more confident about their findings and enhance validity (Cohen, 2007, p. 141). Similar to triangulation, there are various kinds of validity. The type I will focus on is “concurrent validity”, because it is the type enhanced in the article I am evaluating. How the triangulation ensures “concurrent validity” (Cohen, 2007, p. 140) is the main concern of this part. “Concurrent validity” is a variation of “criterion-related validity” (Cohen, 2007, p. 140), also called “criterion validity” by Perri 6 and Bellamy, implying “whether the measures are in line with other measures of the same content that are generally accepted as valid in the wider research community” (Perri 6 & Bellamy, 2012, p. 92). “To demonstrate this form of validity the data gathered from using one instrument must correlate highly with data gathered from using another instrument” (Cohen, 2007, p. 140). To be specific, in the article under review the data are collected both from survey and from interview under the guidance of a large amount of literature, applying “theoretical triangulation” and “methodology triangulation”; thus the concurrent validity is relatively well ensured. As Lancy indicates, “using multiple data sources also allows one to fill in gaps that would occur if we relied on only one source” (Lancy, 1993, p. 20). Last but not least, the research draws on the advantages of its research design. According to Robson, social research designs can be separated into “fixed designs” and “flexible designs” (Robson, 2011, p. 5). The key to distinguishing these two designs is whether the procedure and focus of the research are fixed or not (Robson, 2011). However, it should be noticed that there are overlaps between them. For example, one specific fixed-design study could be flexibly influenced by qualitative data. So for those using both qualitative and quantitative data, there are “multi-strategy designs” (Robson, 2011, p. 6).
Here, by “multi-strategy”, which involves a “substantial collection of both qualitative and quantitative data in different phases or aspects of the same project” (Robson, 2011, p. 6), I do not mean that it contradicts the qualitative research method.

It means a research design combining qualitative and quantitative elements while conducting the qualitative research method. In a narrow sense, the method used in this article should not be called “multi-strategy”, because the qualitative elements account for a larger proportion. However, the researchers take advantage of using both kinds of elements. For example, though “there is a tendency for people to over-choose the middle option” (Thomas, 2011, p. 178), the quantitative approach of the “five-point Likert scale” (Palethorpe & Wilson, 2011, p. 29) does save the researchers the trouble of extracting specific data from abstract description. For the analysis, evaluation and interpretation of data and sample, this paper uses “descriptive statistics (methods used to summarize or describe our observations)” (Rowntree, 2000, p. 19) to summarize the sample of the research, and indicates that future study is needed for “inferential statistics”, which “is concerned with generalizing from a sample, to make estimates and inferences about a wider population” (Rowntree, 2000, p. 1). By using “opportunistic purposive sampling”, the researchers regarded respondents as representatives of “a diverse group of trainers from across the UK with male and female trainers aged between 26-55 years” (Palethorpe & Wilson, 2011, p. 428). One might hold the opinion that using “mechanical methods” (Rowntree, 2000, p. 24) of selecting randomly is a safe way to obtain an unbiased, representative sample; however, “it is conceivable that you could use random methods and still end up with a biased sample” (Rowntree, 2000, p. 25).

So, considering the rich experience of the researchers, “opportunistic purposive sampling” is a better choice to avoid the possible unrepresentativeness of random sampling.

Weaknesses

However, there are some reservations. Firstly, when analyzing the effectiveness of different strategies, it seems that the researchers have not thought about the “control variable”. According to David and Sutton, a “control variable” is “a variable that influences the relationship between the independent and dependent variables” (David & Sutton, 2011, p. 11). Though it is a term from quantitative research design, I would suggest using it and adopting control groups in each training program. Otherwise, variables such as the differences in trainees, trainers and training environment among different programs might influence the validity of the data. Maybe this limitation is hard for the researchers to avoid, given that the training is not conducted by the researchers themselves: the data are indirectly collected as comments/feedback from different trainers.

Thus, to some extent, it is really hard to ensure the validity of the data in this research, since there are so many variables. Moreover, even after adopting control groups and comparing data from several groups in one particular training program, the validity of the data is easily influenced by uncontrollable variables. Taking interviews as an example, uncontrollable variables could include “characteristics of the interviewers”, “interactions of interviewer/respondent characteristics” and privacy concerns of the respondents (Robson, 2011, p. 241).

Although the researchers have tried their best by adopting a "semi-structured interview" (Thomas, 2011, p. 164), indicating that "11 respondents were interviewed in a 'guided' unstructured format in which participants were allowed a considerable degree of latitude to express their opinions within the interview framework" (Palethorpe & Wilson, 2011, p. 429), they have not excluded the influence of the "framework" itself. So it is rather difficult for the researchers to ensure the validity of the data and to achieve the research purpose precisely.

Another influence on validity that the researchers may not have considered fully is the representativeness of the sample. Since the research relies mainly on "questionnaire-based surveys" (specifically "Internet surveys" and "interview surveys") (Robson, 2011, p. 240), which ignore "the characteristics of non-respondents" (Robson, 2011, p. 240), it is doubtful to claim that "the sample of respondents is representative" (Robson, 2011, p. 240).

It may be more persuasive to say that "our statistical methodology enables us to collect samples that are likely to be as representative as possible" (Rowntree, 2000, p. 23) than that "the respondents represented a diverse group of trainers from across the UK with male and female trainers aged between 26-55 years" (Palethorpe & Wilson, 2011, p. 428).
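Rowntree's caveat that even random methods can yield a biased sample is easy to demonstrate: with a small sample, chance alone regularly produces draws that misrepresent the population. The sketch below uses an invented 50/50 population of 1,000 trainers, not the study's actual sample.

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable
population = ["male"] * 500 + ["female"] * 500  # hypothetical 50/50 population

# Draw many small, perfectly random samples and count how often
# they come out badly unbalanced despite the balanced population.
unbalanced = 0
trials = 10_000
for _ in range(trials):
    sample = random.sample(population, 10)
    males = sample.count("male")
    if males <= 2 or males >= 8:  # an 80/20 split or worse
        unbalanced += 1

print(f"{unbalanced / trials:.1%} of random samples were 80/20 or worse")
```

Roughly one random sample in ten comes out 80/20 or worse, which is why a deliberately constructed purposive sample can sometimes be the more representative choice.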

Conclusion

To sum up, this assignment has evaluated the strengths and weaknesses of the article under review within the framework of its methodologies and methods.

Weighing the positive aspects of the article against its problematic areas, the authors' claims about their findings appear relatively persuasive. The contributions they have made by putting theory into practice are also valuable, since this is real-world research.

References:

  1. Allison, P. (2012). The source of knowledge: Course introduction. United Kingdom: The University of Edinburgh.
  2. Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). London & New York: Routledge.
  3. David, M., & Sutton, C. D. (2011). Social research: An introduction (2nd ed.). New Delhi: SAGE.
  4. Esterberg, K. G. (2002). Qualitative methods in social research. United States: McGraw-Hill Companies, Inc.
  5. Lancy, D. F. (1993). Qualitative research in education: An introduction to the major traditions. New York: Longman.
  6. Palethorpe, R., & Wilson, J. P. (2011). Learning in the panic zone: Strategies for managing learner anxiety. Journal of European Industrial Training, 35(5), 420-438.
  7. Perri 6, & Bellamy, C. (2012). Principles of methodology: Research design in social science. Croydon: SAGE.
  8. Robson, C. (2011). Real-world research (3rd ed.). Cornwall: John Wiley & Sons Ltd.
  9. Rowntree, D. (2000). Statistics without tears: An introduction for non-mathematicians. London: Penguin Group.
  10. Shon, P. C. H. (2012). How to read journal articles in the social sciences. London: SAGE.
  11. Thomas, G. (2011). How to do your research project. London: SAGE.
