Hypothesis Testing Is Undertaken Health And Social Care Essay


This chapter describes the methodology used in the current study, which examines the relationship between emotional intelligence (EI) and burnout among nurses working in private hospitals in Malaysia. Specifically, this chapter details the research design selected by the researcher, the population and sampling, the sampling procedures, and the data collection method.

3.2 Research Design

Research design is a master plan that specifies the methods and procedures for collecting and analyzing the data needed to solve a problem (Zikmund, 2003). According to Cooper and Schindler (2008), research design is the blueprint for the collection, measurement, and analysis of data. Sekaran and Bougie (2011) stated, "the research design involves a series of rational decision-making choices relating to the purpose of the study, the type of investigation, the extent of researcher interference, the study setting, the unit of analysis, the time horizon, the type of sample to be used as well as the measurement, data collection methods, sampling design, and data analysis".

This study used the hypothetico-deductive method, or a quantitative approach, whereby, according to Creswell (2005), the hypothetico-deductive method is "a type of educational research in which the researcher decides what to study, asks specific, narrow questions, collects numeric (numbered) data from participants, analyzes these numbers using statistics, and conducts the inquiry in an unbiased and objective manner". In short, the hypothetico-deductive method is an inquiry into an identified problem, based on testing a theory, measured with numbers, and analyzed using statistical techniques. The goal of the hypothetico-deductive method is therefore to determine whether the predictive generalizations of a theory hold true. This method also provides a quick, inexpensive, efficient, and accurate means of assessing information about those who are involved in the study (Zikmund, 2003).

The purpose of this study was hypothesis testing. Based on the explanation given by Sekaran and Bougie (2011), hypothesis testing is undertaken to explain the variance in the dependent variable or to predict organizational outcomes. In this sense, the researcher wanted to know the nature of the relationship that may be established between EI and burnout by testing the hypotheses developed. Since the aim of this study is to establish a measurable relationship between EI and burnout, the hypothetico-deductive approach is more suitable. This method is appropriate for examining the relationship between variables and for determining how much one variable contributes to the prediction of another (Creswell, 2005; Leedy & Ormrod, 2005). Specifically, a hypothetico-deductive survey method using a descriptive correlational design is used to test the relations between all the variables of the study. The correlational research design is used when the relationships between variables are not considered causal, and where the relationship between two or more variables is determined (Salkind, 2003). In addition, hypotheses are developed and tested to describe and explain the nature of the relationship between EI and burnout among the nurses working in private hospitals.
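As an illustration of how such a correlational hypothesis can be tested once the scores are available, the sketch below computes a Pearson correlation between two sets of scores. The score values are made up for illustration and are not data from the study.

    # Illustrative sketch only: tests a correlational hypothesis between EI and
    # burnout scores using Pearson's r. The score lists below are hypothetical.
    from scipy import stats

    ei_scores = [112, 125, 98, 130, 117, 104, 121, 109]   # hypothetical SSEIT totals
    burnout_scores = [54, 41, 66, 38, 47, 60, 44, 52]      # hypothetical MBI-HSS totals

    r, p_value = stats.pearsonr(ei_scores, burnout_scores)
    print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Reject the null hypothesis of no linear relationship.")
    else:
        print("Fail to reject the null hypothesis.")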

Sekaran and Bougie (2011) pointed out that the extent of interference by the researcher with the normal flow of work in the workplace has a direct bearing on whether the study undertaken is causal or correlational. Since correlational research was used, the study was conducted in the natural environment of the organization with minimal interference by the researcher with the normal flow of work, in noncontrived settings. In other words, data were collected from the nurses involved at their workplace. The researcher did not alter any aspect of their workplace setting, and their daily routine jobs were minimally interfered with while the research was being done. Apart from that, the data were gathered only once. Hence, it was a cross-sectional study. Figure 3.1 below shows the research design for the relevant study.

3.3 Population and Sampling

The population of a research is "a group of potential participants to whom you want to generalize the results of the study" (Salkind, 2003). Since resources such as time, cost, and human resources are limited, it is not practical and almost impossible to survey the whole population. On the other hand, sampling enables a researcher to gather information quickly and also reduces the cost and manpower requirements for data collection. Sampling also enables the researcher to make generalizations regarding the whole population, or parts of it, based on a small number of elements (Zikmund, 2003). This section discusses the sampling procedure, including the population, unit of analysis, sampling frame, sampling design, and sample size.

3.3.1 Population, Sampling Frame and Unit of Analysis

The population for the study consists of the staff nurses employed at three private hospitals in the Klang Valley, Malaysia.

3.3.2 Sampling Design

This research focused on the private hospitals in the Klang Valley. The logic behind selecting the private hospitals in the Klang Valley is that the large and well-equipped hospitals are located in this area and they serve a higher population density. The highest population densities are found in Kuala Lumpur, followed by Penang and Putrajaya (6,891, 1,490, and 1,478 persons per square kilometre respectively) for the year 2010 (Department of Statistics, Malaysia, 2012). Since the larger private hospitals are concentrated in the Klang Valley, they also have a higher bed capacity, which is assumed to translate directly into the number of nurses hired by these hospitals. A total of three private hospitals in the Klang Valley were selected, most of them having a bed capacity of more than 200. Therefore, it can be assumed that quite a majority of private hospital nurses are located in the hospitals in the Klang Valley.

Additionally, the elements that constituted the sample of the research needed to be selected from the population. The process of selecting an adequate number of elements from a population is called sampling design. The major types of sampling design include probability and non-probability sampling. In probability sampling, every element in the population has some known chance of selection, whereas in non-probability sampling, the elements' chance of being selected as sample subjects is unknown (Zikmund, 2003). High generalizability of the findings and findings that are not confidently generalizable are the defining characteristics of probability and non-probability sampling respectively (Sekaran & Bougie, 2011). This study utilized a probability sampling design to select the individual private hospital staff nurses. The nurses were selected using simple random sampling, sketched below, to enable wider generalizability of the findings.
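A minimal sketch of how such a simple random draw from a sampling frame might look is given below; the frame size, identifiers, and sample size are hypothetical and are not taken from the actual hospital records.

    # Minimal sketch of simple random sampling: every nurse on the sampling
    # frame has an equal, known chance of selection. The frame shown here is
    # a hypothetical list of staff IDs, not actual hospital data.
    import random

    sampling_frame = [f"NURSE-{i:04d}" for i in range(1, 1201)]  # hypothetical frame of 1,200 nurses
    sample_size = 291                                            # e.g. from a Krejcie-and-Morgan-style table

    random.seed(42)                      # fixed seed so the draw can be reproduced
    sample = random.sample(sampling_frame, sample_size)
    print(sample[:5])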

3.3.3 Sample Size

The decision about sample size is not based on a definite answer but depends on a number of considerations (Bryman & Bell, 2007). The sample size depends on three factors: (1) the type of data analysis; (2) the desired accuracy of the results; and (3) the population characteristics (Neuman, 2003). According to Sekaran and Bougie (2011), the sample size is governed by the extent of precision and confidence desired, the variability in the population, cost and time constraints, and the size of the population. The sample size should be large enough to enable researchers to estimate the population parameters within acceptable limits. In general, the two components of a good sample are its adequacy and representativeness. Since an optimal sample size also helps in minimizing the total cost of sampling error, an appropriate sample size must be chosen.

Sekaran and Bougie (2011) stated that a table suggested by Krejcie and Morgan (1970) has greatly simplified the sample size decision and ensures a good decision model. Since the population of this study consists of private hospital staff nurses as identified earlier, based on the table provided by Krejcie and Morgan, the sample size needed was at least n=xxxx staff nurses (the formula underlying the table is sketched below).
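For reference, the sketch below implements the formula that underlies the Krejcie and Morgan (1970) table; the population size passed to it is a placeholder, since the exact figures are not reported here.

    # Sketch of the Krejcie and Morgan (1970) sample-size formula behind the
    # published table: s = X^2 * N * P(1-P) / (d^2 * (N-1) + X^2 * P(1-P)),
    # with X^2 = 3.841 (chi-square, 1 df, .05 level), P = 0.5, d = 0.05.
    import math

    def krejcie_morgan(population_size, chi_sq=3.841, p=0.5, d=0.05):
        numerator = chi_sq * population_size * p * (1 - p)
        denominator = d ** 2 * (population_size - 1) + chi_sq * p * (1 - p)
        return math.ceil(numerator / denominator)

    print(krejcie_morgan(1000))   # -> 278, matching the published table entry for N = 1,000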

3.4 Data Collection Method

This section explains the method used for gathering data. In this study, secondary as well as primary data were involved. Secondary data refers to journal articles, public records, textbooks, or any other information that was available for reference. From these data, related areas and a number of data collection methods were studied and the most applicable ones were chosen. On the other hand, primary data resulted from a combination of two different sets of questionnaires developed specifically for each of the areas: EI and burnout. These questionnaires were combined along with the selected demographic variables. This section further describes some advantages of conducting a survey using questionnaires. It also elaborates on each questionnaire used to measure the predictor and criterion variables of the current study.

3.4.1 Personally Administered Questionnaires

Survey researchers collect quantitative, numeric data using questionnaires (Creswell, 2005). A questionnaire is "a pre-formulated written set of questions to which respondents record their answers within closely defined alternatives" (Sekaran & Bougie, 2011). Basically, questionnaires enable efficient data collection when the researcher knows exactly what information is needed and how to measure the variables of the study (Sekaran & Bougie, 2011). Specifically, this study used the personally administered questionnaire survey method for data collection, and the instrument of the study was developed by integrating items applied by previous researchers.

According to Sekaran and Bougie (2011), the main advantages of personally administered questionnaires include:

  1. rapport can be established and the respondent motivated;
  2. doubts can be clarified on the spot;
  3. it is less expensive and consumes less time than interviewing when administered to groups of respondents;
  4. an almost 100% response rate is ensured and responses can be collected within a short period of time; and
  5. the anonymity of respondents is high.

To design a good questionnaire, Sekaran and Bougie (2011) stated that it is advisable to include some negatively worded questions instead of phrasing all questions positively. Thereby, the tendency of respondents to mechanically circle the points toward one end of the scale is minimized. Nevertheless, in case this does still happen, the researcher has an opportunity to detect such bias. Hence, both positively and negatively worded questions are included in the questionnaire for the current research (negatively worded items are reverse-scored before analysis, as sketched below). Apart from that, double-barrelled, ambiguous, recall-dependent, leading, and loaded questions, as well as socially desirable responses, have to be avoided (Sekaran & Bougie, 2011). The sequence of questions should be such that the respondent is led from questions of a general nature to those that are more specific, and from questions that are relatively easy to answer to those that are progressively more difficult (Sekaran & Bougie, 2011).
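A minimal sketch of the reverse-scoring step for negatively worded Likert items follows; the item numbers used are illustrative only and do not correspond to the study's actual scoring key.

    # Sketch of how negatively worded items are typically reverse-scored before
    # a scale is totalled, so that a high score always means "more of" the
    # construct. Item numbers below are illustrative, not the study's real key.
    REVERSE_KEYED = {5, 28, 33}          # hypothetical negatively worded items
    SCALE_MAX_PLUS_MIN = 5 + 1           # 5-point Likert scale: reversed value = 6 - raw value

    def score_item(item_number, raw_value):
        if item_number in REVERSE_KEYED:
            return SCALE_MAX_PLUS_MIN - raw_value
        return raw_value

    responses = {5: 2, 12: 4, 28: 1}     # item number -> raw Likert response
    scored = {item: score_item(item, value) for item, value in responses.items()}
    print(scored)                        # {5: 4, 12: 4, 28: 5}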

An entire research rests on the measurement instruments, which must be reliable, valid, and appropriate for answering the research question of the study (Leedy & Ormrod, 2005). The use of existing instruments ensures the quality of survey questions (Cone & Foster, 1993). Using existing instruments to construct a measurement questionnaire adds proven validity, reliability, accuracy, and effectiveness from past use (Creswell, 2005). Therefore, the researcher adapted existing self-report instruments to measure all the constructs of the present research. All the self-report measures are discussed in detail in the following sections.

3.4.1.1 Schutte Self-Report Emotional Intelligence Test ( SSEIT )

This study utilized the Schutte Self-Report Emotional Intelligence Test (SSEIT) (Schutte et al., 1998) to measure the EI of the nurses working in the selected private hospitals. The SSEIT was chosen after consideration of several alternative measures of EI, including the EQ-i (Bar-On, 1997), the ECI (Boyatzis, Goleman & Rhee, 2000), and the MSCEIT (Mayer, Salovey, Caruso & Sitarenios, 2003). These measures were not used because they are proprietary and require considerable time to administer. On the other hand, the SSEIT provides the researcher with the ability to score the data, does not entail any cost for use of the instrument, and is less time-consuming for the research participants.

The SSEIT, which is also referred to as the Assessing Emotions Scale, is a self-report measure that assesses EI as defined by Salovey and Mayer (1990). Schutte et al. (1998) conducted a series of studies to develop the scale and to determine its validity and reliability. A factor analysis of more than 60 items suggested a one-factor solution of 33 items. This one-factor solution resulted in scale items representing each of the following three categories:

  1. appraisal and expression of emotion in the self and others;
  2. regulation of emotion in the self and others; and
  3. utilization of emotions in solving problems.

However, the most widely used subscales derived from the 33-item SSEIT are based on factors identified by Petrides and Furnham (2000), Ciarrochi, Chan, and Bajgar (2001), and Saklofske, Austin, and Minski (2003). These factor analytic studies suggested a four-factor solution for the 33 items. The four factors are described as:

  1. Perception of Emotion (10 items);
  2. Managing Own Emotions (9 items);
  3. Managing Others' Emotions (8 items); and
  4. Utilization of Emotion (6 items) (Ciarrochi et al., 2001).

The SSEIT has been used and validated in several studies (Petrides & Furnham, 2000; Schutte, Malouff, Bobik, Coston, Greeson, Jedlicka, Rhodes & Wendorf, 2001; Schutte, Malouff, Simunek, McKenley & Hollander, 2002; Charbonneau & Nicol, 2002). In addition, an internal consistency analysis with two different samples showed Cronbach's alphas of 0.90 and 0.87 (Schutte et al., 1998).

Sample items from this instrument include: "I find it hard to understand the non-verbal messages of other people" for Perception of Emotion (PE), "When I am faced with obstacles, I remember times I faced similar obstacles and overcame them" for Managing Own Emotions (ME), "I know when to speak about my personal problems to others" for Managing Others' Emotions (MOE), and "Some of the major events of my life have led me to re-evaluate what is important and not important" for Utilization of Emotion (UE). The SSEIT was rated on a 5-point Likert scale as in the original instrument, with responses ranging from 1 (strongly disagree) to 5 (strongly agree).
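To illustrate how subscale scores for such an instrument are typically totalled, the sketch below sums responses by subscale; the item-to-subscale mapping shown is a placeholder and not the published SSEIT key, which follows Ciarrochi et al. (2001).

    # Sketch of subscale scoring for a multi-dimensional Likert instrument such
    # as the SSEIT. The item-to-subscale mapping below is illustrative only.
    SUBSCALES = {
        "Perception of Emotion":     [1, 2, 3],    # placeholder item numbers
        "Managing Own Emotions":     [4, 5, 6],
        "Managing Others' Emotions": [7, 8],
        "Utilization of Emotion":    [9, 10],
    }

    def subscale_totals(responses):
        """responses: dict mapping item number -> Likert value (1-5), already reverse-scored."""
        return {name: sum(responses[i] for i in items) for name, items in SUBSCALES.items()}

    example = {i: 3 for i in range(1, 11)}
    print(subscale_totals(example))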

3.4.1.2 Maslach Burnout Inventory-Human Service Survey ( MBI-HSS )

The Maslach Burnout Inventory (MBI) is commonly used as a research tool in the current literature to measure the level of burnout (Lee, Ashforth & Blake, 1990; Kanste, Miettunen & Kyngas, 2006; Wu, Zhu, Wang, Wang & Lan, 2007). This study measured nurse burnout using the 22-item Maslach Burnout Inventory-Human Services Survey, third edition (MBI-HSS) (Maslach et al., 1996). The MBI-HSS measures burnout among employees in human services institutions and health care occupations such as nursing, social work, psychology, and the ministry in terms of:

  1. Emotional Exhaustion (9 items);
  2. Depersonalization (5 items); and
  3. Personal Accomplishment (8 items).

The MBI-HSS has sound psychometric properties that ensure reliability and validity. The MBI has been demonstrated to have construct validity through the analysis of data from a pioneer instrument of 47 items administered to human service personnel (Maslach & Jackson, 1981a). Convergent validity studies indicate that the MBI-HSS scales measure the same construct as other burnout instruments. Correlations of emotional exhaustion and depersonalization with other burnout self-report indicators are high (rs > .50), whereas correlations with personal accomplishment are somewhat lower (rs > .30) (Schaufeli & Enzmann, 1998). Maslach et al. (1996) reported the internal consistency of the MBI with reliability coefficients as follows: α = .90 for emotional exhaustion (EE), α = .79 for depersonalization (DP), and α = .71 for reduced personal accomplishment (PA). Furthermore, the test-retest reliability coefficients for EE, DP, and PA ranged from moderate to high.

A sample item from the Emotional Exhaustion (EE) subscale is: "I feel emotionally drained from my work." A sample item from the Depersonalization (DP) subscale is: "I feel I treat some patients as if they were impersonal objects." A sample item from the Personal Accomplishment (PA) subscale is: "I can easily understand how my patients feel about things." Basically, nurse burnout was measured based on statements that concern feelings or attitudes about one's work and how frequently those feelings occur. The frequency with which the nurses experience each item was measured on a 5-point Likert scale anchored by "Never" and "Every day".

3.4.1.3 Demographic Data

It is a matter of choice for the researcher whether questions seeking personal information about respondents should appear at the beginning or at the end of the questionnaire (Sekaran & Bougie, 2011). According to Oppenheim (1986), some researchers ask for personal data at the end rather than at the beginning of the questionnaire. Their reasoning may be that by the time the respondent reaches the end of the questionnaire, he or she has been convinced of the legitimacy and genuineness of the questions framed by the researcher and is therefore more inclined and amenable to share personal information (Sekaran & Bougie, 2011). On the other hand, researchers who prefer to elicit most of the personal information at the very beginning may argue that once respondents have shared some of their personal history, they may have psychologically identified themselves with the questionnaire and may feel a commitment to respond (Sekaran & Bougie, 2011). Both methods of seeking personal information have their pros and cons. For the current study, demographic information was requested in the last section of the questionnaire. This part includes demographic details such as age, gender, ethnic group, marital status, years of work experience, qualification, nursing category, and department. The respondents were required to tick the appropriate answers. The survey questionnaire can be found in Appendix A.

The two main instruments selected for this study have shown construct validity and reliability based on previous studies, but they have not been tested in the Malaysian context. Therefore, the reliability of all the instruments and their content validity were tested during the pre-test. The summary of the questionnaire, with the breakdown of sections and the description of each of the study instruments, is shown in Table 3.1.
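As an illustration of the reliability check typically run on pre-test data, the sketch below computes Cronbach's alpha from a small, made-up response matrix.

    # Minimal sketch of the Cronbach's alpha computation typically used when
    # checking instrument reliability in a pre-test. The data below are made up.
    import numpy as np

    def cronbach_alpha(item_scores):
        """item_scores: 2-D array, rows = respondents, columns = items."""
        item_scores = np.asarray(item_scores, dtype=float)
        k = item_scores.shape[1]
        item_variances = item_scores.var(axis=0, ddof=1).sum()
        total_variance = item_scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    pretest = np.array([[4, 5, 4, 3],
                        [3, 4, 3, 3],
                        [5, 5, 4, 4],
                        [2, 3, 2, 2]])
    print(round(cronbach_alpha(pretest), 2))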


Sample design for Blackberry

In sampling, an element is the object (or person) about which or from which the information is desired. In survey research, the element is usually the respondent. A population is the total of all the elements that share some common set of characteristics. Element: objects that possess the information the researcher seeks and about which the researcher will make inferences. Population: the aggregate of all elements, sharing some common set of characteristics, that comprise the universe for the purpose of the marketing research problem.

The researcher can obtain information about population parameters by taking either a census or a sample. Census: a complete enumeration of the elements of a population or study objects. Sample: a subgroup of the elements of the population selected for participation in the study.

Conditions favoring the use of a sample versus a census, by factor:
- Budget: sample, small; census, large.
- Time available: sample, short; census, long.
- Population size: sample, large; census, small.
- Variance in the characteristic: sample, small; census, large.
- Cost of sampling errors: sample, low; census, high.
- Cost of nonsampling errors: sample, high; census, low.
- Nature of measurement: sample, destructive; census, nondestructive.
- Attention to individual cases: sample, yes; census, no.

Advantages of sampling: sampling saves time and money; sampling saves labor; a sample coverage permits a higher overall level of adequacy than a full enumeration; and a complete census is often unnecessary, wasteful, and a burden on the public.

1) Define the Population: Sampling design begins by specifying the target population, which should be defined in terms of elements, sampling units, extent, and time frame. Population/target population: this is any complete, or theoretically specified, aggregation of study elements. It is usually the ideal population or universe to which research results are to be generalized.

Survey population: this is an operational definition of the target population, that is, the target population with explicit exclusions (for example, the accessible population, excluding those outside the country). Element (similar to the unit of analysis): this is the unit about which information is collected and that provides the basis of analysis. In survey research, elements are people or certain types of people. Sampling unit: this is the element or set of elements considered for selection in some stage of sampling (the same as the elements in a simple single-stage sample).

In a multi-stage sample, the sampling unit could be blocks, households, and individuals within the households. Extent: this refers to geographical boundaries. Time frame: the time frame is the time period of interest. In our case: Population/target population = BlackBerry users. Survey population = BlackBerry users between the ages of 18 and 24, which refers to university students regarding the demographic factors. Elements = BlackBerry users who are university students. Sampling unit = BlackBerry users in the Business Administration Faculty of Istanbul University. Extent = Business Administration Faculty of Istanbul University.

Time frame = 2 weeks, between 4-15 November. Given the large size of the target population and the limited time and money, it was clearly not feasible to interview the entire population of BlackBerry users, that is, to take a census. So a sample was taken, and a subgroup of the population was selected for participation in the research. Our sample/subgroup can be seen above. 2) Determine the Sampling Frame: A sampling frame is a representation of the elements of the target population. To be specific, this is the actual list of sampling units from which the sample, or some stage of the sample, is selected.

It is simply a list of the study population. Sampling frame in our case = the list of students in the Business Administration Faculty of Istanbul University. 3) Select a Sampling Technique: Selecting a sampling technique involves choosing between nonprobability and probability sampling. Nonprobability sampling relies on the personal judgement of the researcher, rather than chance, in selecting sample elements. Convenience sampling, as the name implies, involves obtaining a sample of elements based on the convenience of the researcher. The selection of sampling units is left primarily to the interviewer.

Convenience sampling has the advantages of being both inexpensive and fast. Additionally, the sampling units tend to be accessible, easy to measure, and cooperative. Judgement sampling: the researcher selects the sample based on judgement. This is usually an extension of convenience sampling. For example, a researcher may decide to draw the entire sample from one "representative" city, even though the population includes all cities. When using this method, the researcher must be confident that the chosen sample is truly representative of the entire population.

Quota sampling introduces two stages to the judgemental sampling process. The first stage consists of developing control categories, or quotas, of population elements. Using judgement to identify relevant categories such as age, sex, or race, the researcher estimates the distribution of these characteristics in the target population. Once the quotas have been assigned, the second stage of the sampling process takes place. Elements are selected using a convenience or judgement process. Considerable freedom exists in selecting the elements to be included in the sample.

The only requirement is that the elements that are selected fit the control characteristics. Snowball sampling is a special nonprobability method used when the desired sample characteristic is rare. It may be extremely difficult or cost prohibitive to locate respondents in these situations. Snowball sampling relies on referrals from initial subjects to generate additional subjects. While this technique can dramatically lower search costs, it comes at the expense of introducing bias, because the technique itself reduces the likelihood that the sample will represent a good cross section of the population.

Probability sampling: in this kind of sampling, elements are selected by chance, that is, randomly. The probability of selecting each potential sample from a population can be prespecified. Simple random sampling is the purest form of probability sampling. Each member of the population has an equal and known chance of being selected. When there are very large populations, it is often difficult or impossible to identify every member of the population, so the pool of available subjects becomes biased. Systematic random sampling is often used instead of random sampling.

It is also called an Nth-name selection technique. After the required sample size has been calculated, every Nth record is selected from a list of population members. As long as the list does not contain any hidden order, this sampling method is as good as the random sampling method. Its only advantage over the random sampling technique is simplicity. Systematic sampling is frequently used to select a specified number of records from a computer file; a minimal sketch is given below.
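A minimal sketch of the Nth-name (systematic) selection described above, assuming a hypothetical list of 500 students:

    # Sketch of systematic (Nth-name) sampling: after a random start, every
    # k-th record is drawn from the list. The frame below is a placeholder.
    import random

    def systematic_sample(frame, sample_size):
        k = len(frame) // sample_size                 # sampling interval
        start = random.randrange(k)                   # random start within the first interval
        return frame[start::k][:sample_size]

    frame = [f"student-{i}" for i in range(1, 501)]   # hypothetical list of 500 students
    print(systematic_sample(frame, 50)[:5])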

Stratified random sampling is a commonly used probability method that is superior to random sampling because it reduces sampling error. A stratum is a subset of the population that shares at least one common characteristic. Examples of strata might be males and females, or managers and non-managers. The researcher first identifies the relevant strata and their actual representation in the population. Random sampling is then used to select a sufficient number of subjects from each stratum; "sufficient" refers to a sample size large enough for us to be reasonably confident that the stratum represents the population. Stratified sampling is often used when one or more of the strata in the population have a low incidence relative to the others. A proportional-allocation sketch is given below.
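The sketch below illustrates proportionally allocated stratified random sampling with two hypothetical strata; the stratum names and sizes are made up for illustration.

    # Sketch of proportionally allocated stratified random sampling: the sample
    # drawn from each stratum is proportional to the stratum's share of the
    # population. Strata and sizes below are hypothetical.
    import random

    strata = {
        "managers":     [f"mgr-{i}" for i in range(1, 101)],     # 100 elements
        "non-managers": [f"staff-{i}" for i in range(1, 401)],   # 400 elements
    }
    total = sum(len(members) for members in strata.values())
    overall_sample_size = 100

    sample = {}
    for name, members in strata.items():
        n_stratum = round(overall_sample_size * len(members) / total)
        sample[name] = random.sample(members, n_stratum)

    print({name: len(selected) for name, selected in sample.items()})  # {'managers': 20, 'non-managers': 80}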

Cluster sampling may be used when it is either impossible or impractical to compile an exhaustive list of the elements that make up the target population. Usually, however, the population elements are already grouped into subpopulations, and lists of those subpopulations already exist or can be created. For example, let's say the target population in a study was church members in the United States. There is no list of all church members in the country. The researcher could, however, create a list of churches in the United States, choose a sample of churches, and then obtain lists of members from those churches. 4) Determine the Sample Size: The statistical approaches to determining sample size are based on confidence intervals. These approaches may involve the estimation of a mean or a proportion. When estimating a mean, determination of sample size using a confidence interval approach requires a specification of the precision level, confidence level, and population standard deviation. In the case of a proportion, the precision level, confidence level, and an estimate of the population proportion must be specified. The sample size determined statistically represents the final or net sample size that must be achieved.

In order to achieve this final sample size, a much greater number of potential respondents have to be contacted to account for the reduction in responses due to incidence rates and completion rates. Non-response error arises when some of the potential respondents included in the sample do not respond. The primary causes of low response rates are refusals and not-at-homes. Refusal rates may be reduced by prior notification, motivating the respondents, incentives, proper questionnaire design and administration, and follow-up. The percentage of not-at-homes can be substantially reduced by callbacks.

Adjustments for non-response can be made by subsampling non-respondents, replacement, substitution, subjective estimates, trend analysis, weighting, and imputation. The statistical estimation of sample size is even more complicated in international marketing research, as the population variance may differ from one country to the next. A preliminary estimation of the population variance for the purpose of determining the sample size also has ethical ramifications. The Internet and computers can assist in determining the sample size and adjusting it to account for expected incidence and completion rates.
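As a worked illustration of the confidence-interval approach for a proportion, and of grossing the net sample size up for incidence and completion rates, consider the sketch below; all input values (confidence level, precision, rates) are illustrative.

    # Sketch of the confidence-interval approach to sample size for a proportion,
    # followed by the gross number of contacts needed once incidence and
    # completion rates are taken into account. All input values are illustrative.
    import math

    z = 1.96            # 95% confidence level
    p = 0.5             # assumed population proportion (most conservative)
    d = 0.05            # desired precision (margin of error)

    net_sample = math.ceil(z ** 2 * p * (1 - p) / d ** 2)      # statistically determined (net) size

    incidence_rate = 0.60     # share of contacts eligible to participate
    completion_rate = 0.80    # share of eligible respondents who complete

    gross_contacts = math.ceil(net_sample / (incidence_rate * completion_rate))
    print(net_sample, gross_contacts)   # e.g. 385 net; about 803 contacts to be approached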

Sampling distribution: the distribution of the values of a sample statistic computed for each possible sample that could be drawn from the target population under a specified sampling plan. Statistical inference: the process of generalizing the sample results to the population results. Normal distribution: a basis for classical statistical inference that is bell shaped and symmetrical in appearance. Its measures of central tendency are all identical. Standard error: the standard deviation of the sampling distribution of the mean or proportion.

Z values: the number of standard errors a point is away from the mean. Incidence rate: the rate of occurrence of persons eligible to participate in a study, expressed as a percentage. Completion rate: the percentage of qualified respondents who complete the interview. It enables researchers to take into account anticipated refusals by people who qualify. Substitution: a procedure that substitutes for nonrespondents other elements from the sampling frame that are expected to respond. Trend analysis: a method of adjusting for nonrespondents in which the researcher tries to discern a trend between early and late respondents.

This trend is projected to nonrespondents to estimate their characteristic of interest. Weighting: a statistical procedure that attempts to account for non-response by assigning differential weights to the data depending on the response rate. Imputation: a method to adjust for non-response by assigning the characteristic of interest to the nonrespondents based on the similarity of the variables available for both nonrespondents and respondents.
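A minimal sketch of the weighting adjustment defined above, assuming hypothetical response rates for two groups:

    # Sketch of simple weighting for non-response: respondents in groups with a
    # lower response rate receive a proportionally larger weight. Groups and
    # rates are hypothetical.
    response_rates = {"18-20": 0.80, "21-24": 0.50}   # achieved response rate per age group

    weights = {group: 1 / rate for group, rate in response_rates.items()}
    print(weights)   # {'18-20': 1.25, '21-24': 2.0}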


Computer support specialist

"Dad, we have fixed this issue before; did you not take notes?" So as I talked him through the steps again, we got it working. I have chosen to become a computer support specialist. Computer support specialist is a helpful type of job. It can be very flexible. I got to thinking about the way things were going and wanted to start a real career. I did some research and found this information; maybe someone else could use it to help them.

A computer support specialist helps all types of people with computer problems. One can choose to help the regular Joe or a big corporation. One would be able to solve issues with the software of a computer. One could help set up a networking system to ensure everything in the house is running properly. One can easily do these tasks from home if it were necessary, therefore allowing some extra flexibility. I don't know about any of you, but helping someone out is a great feeling. The expected growth prospects for this field are to be faster than most professions.

Job prospects are expected to increase by around fourteen percent, since computers are more widely utilized both by businesses and individuals. There will be a greater need for assistance by anyone that utilizes a computer. One can help the common household with a computer slowdown issue. Maybe one has gotten a virus on the computer; a computer support specialist could help to remove the virus or speed up one's computer. The workplace could be described in different manners depending on the company. One could actually be working in a single office environment. Some workers will be required to do onsite work.

There are also some that even work in their home. Typically the single office will be divided into cubicles. You will have a computer, phone, and other standard office materials (Bureau of Labor Statistics, 2010-2011). A person can expect to earn around $26,580 to $55,990 a year. These numbers will depend on one's level of education and experience in the field. It could also depend on the company one chooses to work for (Bureau of Labor Statistics, 2010-2011). A person can get by with a simple certification. Some companies will do specific software training once you start working for them.

Some companies will require a bachelor's degree. A person will have to receive periodic update training. As the computer systems change, you will need to keep up with the new programs (.NET online, 2011). Setting up computer networks will also change. In order to do this job there are several qualities you need to possess. You will need to be able to listen actively. It is also required that you be able to communicate on several levels. There may be several jobs to complete, so you will need to be able to manage your time. You will also need to be able to teach others how to understand the process you are doing.


Improving Math and Science Scores in Middle School

Program Evaluation: Improving Math and Science Scores in Middle School

TABLE OF CONTENTS
Needs Assessment
Program Theory
Logic Model
Conceptualization & Operationalization of Program Outcomes
Assessment of Program Impact
References

NEEDS ASSESSMENT

Math and Science are two subjects which most students at any level approach with trepidation and intense dislike; however, both subjects are integral to cognitive thinking.

Not only will these subjects provide skills that will help students think more clearly, but students will be academically successful throughout their school career, enjoy wider career choices, and earn more money after graduation. Therefore, establishing a strong foundation in these subjects is integral to future academic and career success. However, studying these subjects in middle school is even more difficult. Studies have shown that the transition for a student from elementary school to middle school is academically and psychologically difficult. According to Maurice Elias in an article entitled "Middle School Transition: It's Harder Than You Think", many former elementary school students are not well prepared for the demands of middle school. They need explicit instruction, coaching, and support with regard to organizing time and resources for homework; responding to work that is more challenging and requires more effort; understanding and addressing the varying expectations of teachers in different subject areas; and accomplishing such basic tasks as taking notes and taking tests (Elias, 2001). Unfortunately, this same sentiment resonates today with the New York City public school system, specifically middle schools located in low income areas.

The New York City Public School System is struggling with mathematics achievement in the grades beyond elementary school. Over 30% of the city’s elementary and middle school students score at the lowest level of the state mathematics test and only 34% of all students pass that test. The mathematics “problem” seems connected to the third major trend in the data, the low performance of middle and junior high schools in the city. In both Mathematics and English Language Arts, the city’s middle and junior high schools seem to be the weakest link in the system (Domanico, 2002).

Recently, the math state scores were released, further underscoring the middle school "math problem" that exists. Results showed that while 75.3% of students at the elementary level passed successfully, only 38.9% of grade eight students passed (Andreatta, 2006, 11). As such, the intent of this study, based on the aforementioned information, is to evaluate and make recommendations with regard to middle school students in a particular school who have been struggling with both subjects. This study will focus on a middle school, IS 166, the George Gershwin School, located in East New York.

The decision to choose IS 166 was dependent on a few factors, among which is the fact that the district within which it is located is considered a "virtual educational dead zone" by a Civic Report drafted by the Manhattan Institute for Policy Research (Domanico, 2002). Additionally, after reviewing the New York City Department of Education's website, which provides an overwhelming amount of information on every public school in the city as well as their progress over recent years in the core subjects, it was found that of the schools within the 19th School District (primarily East New York), IS 166 is one of the worst performing schools.

The school's poor academic performance is further exacerbated by the outstanding grades displayed by another school in the 19th School District, IS 409 (East New York Family Academy), and, outside of the district, by another school, MS 114, located in District 2 (Manhattan), whose grades exceeded both the city's and the state's levels. The graphs below illustrate how IS 166 performed poorly in the last two years on the state Math and Science exams compared to other schools, specifically IS 409 in the same district as well as other schools in other districts.

The last two graphs will show the difference with a higher performing school such as IS 409 and therefore will confirm why this study is going to be conducted.

IS 166 (George Gershwin School) Math and Science Grades. Source: New York Department of Education (Division of Assessment and Accountability, School Report Cards 2005).

The definitions of the levels on which the scores for both subjects are based are:
Level 4: These students exceed the standards and are moving toward high performance on the Regents examination.
Level 3: These students meet the standards and, with continued steady growth, should pass the Regents examination.
Level 2: These students need extra help to meet the standards and pass the Regents examination.
Level 1: These students have serious academic deficiencies.
Source: New York Department of Education (Division of Assessment and Accountability, School Report Cards 2005).

The aforementioned graphs showed how poorly IS 166 has performed in the last two years in both Math and Science. In Math, the percentage of students performing at Levels 3 and 4 decreased from 22. % in 2004 to 17.5% in 2005. The number of students tested at Level 3 was only 60, and at Level 4 only 3, of 361 total students. The remaining students, as displayed on the right hand side of the graph, are still at Level 1, which, as noted by the above definition, means that they are in grave need of assistance. Therefore, for the purposes of the study, the target population will be defined as "in need" students. Although there has been a slight increase in Science, the results are still less than desirable when compared with other schools in the district and the City.

As seen in the graph, only 14% of the students passed at Levels 3 and 4 in 2004 and by 2005, only 18% were able to pass at the same levels. Therefore, if IS 166 continues on this trajectory, it will continue to be labeled an underperforming school that graduates below average students incapable of performing the basic tasks in both subjects. The goal of the evaluation study is to thoroughly review the problems that exist and hopefully get the school to achieve grades similar to IS 409-East New York Family Academy sometime in the near future as is reflected in the following graphs.

IS 409 (East New York Family Academy) Math and Science Grades. Source: New York Department of Education (Division of Assessment and Accountability, School Report Cards 2005).

As noted in the above graphs, IS 409 is performing extremely well at Levels 3 and 4 and has outperformed schools in both math and science in the district (which is truly exceptional given the neighborhood and its history) as well as other City schools. Very few students, if any, are far below the standard in both subjects.

Moreover, as noted before, other schools such as MS 114 and IS 289, located in District 2, have maintained exceptional scores over the two-year period. For 2004 and 2005, MS 114 scored 88% and 81% respectively in Math and 97% and 91% in Science. IS 289 also scored high grades: for both years in Math the school displayed 83% and 73%, compared to other schools in the district and city, and in Science it scored 87% and 82%. Other schools in other districts, from Queens and Staten Island, have also demonstrated solid scores.

This makes designing a program even more of a priority in light of the above referenced comparisons. The study will not focus on the students at all levels in the middle school but specifically on the eighth grade students destined for high school who have yet to grasp the skills needed to succeed and who have been the center of test score analysis over the years. These eighth grade students will be approximately 14 years old but, depending on factors such as repeating a grade or special needs, their age may vary from 14 to 16 years old.

As noted before, they will be identified as "in need" students, and the study will attempt to identify the worst performing students by looking not only at grades but also at possible contributing factors such as income, special needs, and possible crime involvement. The improvement of Math and Science scores is a gargantuan task which requires a major overhaul of the school at all levels; however, to begin, the following services are needed (but are not limited to):

- Offering training sessions for the math and science teachers.

The difference with not only IS 409 but also the other schools in District 2 is that their teachers have more experience and education and are less likely to be absent more than average. The training sessions will be implemented on weekends or after school, whichever is more convenient for the teachers, and will be done prior to establishing an after-school program for the students. The training sessions will allow teachers from higher performing schools an opportunity to impart their techniques for achieving higher grades.

- Offering a separate informative session for the Principal, Maria Ortega, so that she is more knowledgeable about what is needed to succeed in both areas. In most cases, the principal of a school has a general idea of what is needed in most subject areas; however, if the principal is more involved, informed, and fully comprehends the nuances of the subject matter, then she will be able to make better choices in hiring and in understanding the teaching of the curriculum. This is an idea which originated out of reading the case of MS 114 in District 2, which showcases a principal who has not only taught but has written math books for children.

Also, in IS 289, the principal knows each student individually and is fully acquainted with their needs.

- Offering additional services for children that may range from an after-school program to extending class hours to offering classes on the weekend. One of the schools in District 2 actually has classes that last at least 50 minutes, giving students a better opportunity to absorb the material and thereby perform better in exams.

- Offering programs that will incorporate the parents as well. Perhaps this will be in conjunction with the after-school program.

As noted, most of the students in this district are from low income families, and perhaps some of the parents are in low paying jobs or living on welfare. The parents can take advantage of the program by refreshing themselves on the basic concepts of each subject so that they may assist their children and perhaps help themselves.

PROGRAM THEORY

In order to address the dire academic situation at IS 166 (George Gershwin School), and before implementing an after-school program, it is important to address the issue at the higher levels, which means analyzing teaching techniques and, more importantly, principal participation.

At the Center for Civic Innovation Luncheon featuring Chancellor Joel Klein held on Thursday, October 5th at the Harvard Club, Chancellor Klein began his speech with an analogy of the leaky roof and the squeaky floor. He stated that there was a school located in uptown Harlem that had a leaky roof and a squeaky floor. One day a repair man came to repair the floor and the custodian stated that the floor cannot be fixed prior to the roof being fixed to which the repairman replied “That’s not my concern, I am just here for the floor”. The Chancellor began his speech with that story to underscore the problems with the NYC Education system.

He believes that everyone wants to fix the underlying problems without addressing the issues at the surface. The Chancellor’s story may be applied to the case of IS 166 and any other school in need of improvement. Many observers and parents are often led to believe that their children are primarily the problem in achieving higher scores and possibly that their children lack the intellect to truly analyze or process the information given to them. However, it is just as important for the heads of the respective schools to be cognizant of what is needed to improve these scores and the principal is just the person to ensure this.

Therefore, before implementing a program, we have recommended that Principal Maria Ortega participate in a briefing session lasting approximately one month in the summer—right after the end of the school year and before the hiring season begins—for at least 4 hours a day, three days a week. According to reports of comprehensive school reforms in Chicago and Louisiana, the schools’ academic success was primarily attributed to the principals in charge and the contributions they made throughout the reforms.

One report stated that "highly effective schools communicated expectations for teachers. The principal was active in working to improve teacher skills; ineffective teachers were let go." Moreover, the principals played an important role in four areas: a) selection and replacement of teachers; b) classroom monitoring and feedback; c) support for improvement of individual teachers; and d) allocating and protecting academic time (Good et al., 2005, 2207). Therefore, implementing a program or briefing session solely for Principal Ortega would help her improve in all these areas.

Principals, under Chancellor Klein's tenure, have been given more empowerment opportunities and have more responsibilities to ensure the success of their schools. IS 166 has been categorized as a Title I School In Need of Improvement (SINI) under the No Child Left Behind Act (NCLB), and as such, Principal Ortega has to work harder than ever to improve the English, math, and science scores, subjects that are integral to a student's academic success. The program we have suggested will illustrate to Principal Ortega that math, in particular, cannot be taught in the traditional manner, that is, by rote.

In fact, the National Council of Teachers of Mathematics (NCTM) advocates the development of an inquiry-based mathematics tradition. Students taught using this tradition are encouraged to explore, develop conjectures, prove, and problem solve (Manswell Butty, 2001, 20). Students are best able to absorb the material in not only math and science but other subjects if the teachers are able to present it in an interesting manner that entails connections to the outside world. Principal Ortega should also be familiarized with the requirements for the exams and then know exactly how the staff should approach student preparation.

She should also ensure that, with respect to math, she adheres to the recommendations Lyle V. Jones reiterated in his article entitled "Achievement Trends in Math and Science", in which it was stated that:

- Only teachers who like mathematics should teach mathematics;
- The chief objective of school mathematics should be to instill confidence; and
- Mathematics teaching must be based on both contemporary mathematics and modern pedagogy (Jones, 1988, 333).

After completion of this program, and hopefully with a better understanding of what is needed to improve the scores at IS 166, the next step would be to address the teaching staff.

As noted, the methodology used is integral to ensuring that the students comprehend, absorb, and analyze the information being disseminated. If they fail to process the information, then they will ultimately perform poorly on the state exams and possibly continue to do so throughout high school. We recommend, prior to the beginning of the academic year and the implementation of the after-school program, that the teachers, specifically the math teachers, enroll in a summer institute similar to the one reported in an article entitled "Toward a Constructivist Perspective: The Impact of a Math Teacher In-Service Program on Students".

The reason is that the teaching of math, more so than science, requires certain techniques that are far from the traditional methods that most teachers employ. The summer institute in the report offered participating teachers intensive two-week summer institutes and weekly classroom follow-up during one academic year. Moreover, they received an opportunity to reexamine their ideas about the teaching and learning of mathematics.

During the summer institutes, these teachers experienced mathematics classes in which they were encouraged to construct solutions and ideas and to communicate them to a group. They analyzed student understandings as revealed in interviews and they planned lessons which reflected their evolving ideas about mathematics learning and teaching (Simon and Schifter, 1993, 331). Teachers need to plan their lessons in such a manner as to engage the students so that they may effectively communicate their thoughts or problems with a particular issue.

In fact, after completion of the summer institute, and after the teachers began using their newfound techniques, the results were noteworthy and ranged from students stating that “it’s fun to work math problems” to “I’d rather do math than any other kind of homework” to “I like to explain how I solved a problem”(Simon and Schifter, 1993, 333). Therefore, using the above referenced example, the summer institute that we propose for the math teachers will last approximately three weeks in the summer and it would begin approximately mid-August prior to the beginning of the academic year.

This program would be mandated by the principal and would include veteran staff members as well as new ones brought on board. Another factor that teachers have to take into consideration is the population they cater to during the academic year. IS 166 consists of predominantly Black and Hispanic students residing in East New York and its surrounding environs, making them not only an "in need" group in terms of grades but also an "at risk" group in terms of their backgrounds and predisposition to engage in illicit activities.

Many believe teaching techniques are generic and that if they are employed in one school then they may be applicable in another. However, studies have shown that minority children in low income neighborhoods require a different set of techniques. According to Manswell Butty, African-American children have further been identified as favoring four learning styles: a) person-centered, b) affective, c) expressive, and d) movement oriented (Butty, 2001, 23).

Therefore, teachers need to use laboratory or group exercises, discussion sessions, or instructional uses of music and the visual and dramatic arts, especially when those pedagogical techniques promote Black students’ greater academic involvement, interest, and performances (Butty, 2001, 23). However, this is not a generalization implying that all minority children respond to this technique but most will probably respond positively. Therefore, teachers must be made aware of the group of children that they are dealing with and ensure that they employ the above referenced techniques to garner success.

In fact, there are Learning through Teaching in After-School Pedagogical Laboratories (L-TAPL) in California and New Jersey, which not only offer a program for elementary students but also serve as practice-rich professional development for urban teachers. The program aims to improve the achievement of urban students and the competence of their teachers (Foster et al., 2005, 28). According to the Foster article, numerous studies, policies, and programs have addressed the persistent problem of underachievement among poor urban students and its array of possible causes.

The NCLB links teacher quality to improved student achievement, especially among low-income urban children of color. Consequently, improving teacher quality has become one of the hallmarks of current reform efforts (Foster et al., 2005, 28). These laboratories groom future urban teachers to deal with students similar to the target population at IS 166. As such, as an alternative to our summer institute, the teachers are free to enroll in the program offered by this lab in New Jersey.

Therefore, taking into account the above referenced studies, improving teacher quality is of utmost importance when considering the improvement of math and science scores. All of the above has brought us to the most important element of the study: establishing an after-school program.

Establishing an After-School Program: Resources

Funding

Under the NCLB Act, Title I schools such as IS 166 that are listed as Schools In Need of Improvement have failed to reach student achievement targets that have been set for every school.

This means the school has failed to meet state proficiency levels for all students in English Language Arts, Mathematics, and Science and/or the high school graduation rate. Schools falling in the above referenced category may be eligible for Supplemental Educational Services (SES). SES includes free after-school or weekend remedial help or tutoring services. The SES provision offers providers an opportunity to offer low-income children, who may be struggling in school, extra academic help and individual instruction.

Through SES, innovative leaders and educators can start a new tutoring program or expand an existing one to serve more students (New York City Department of Education). However, instead of using an SES provider (to which in some cases the DOE will offer contracts of over a million dollars to provide services to various schools), we will request that the additional funding that would have been used to acquire an SES provider be used to establish the after-school program ourselves, with the assistance of The After-School Corporation (TASC).

TASC is renowned for establishing successful after-school programs, has no contract with the DOE, and thus is not labeled an SES provider. In addition to wanting to establish a program using solely school staff, it is important to note that there have been several complaints about SES providers, and most are being investigated either by the Special Commissioner of Investigation for the New York City School District or by the Office of Special Investigation; in the best interest of the target population, we have decided to forego those providers.

Therefore, the SES funding will be used to offer per-session rates for the teachers participating in the program as well as to pay for the consultant from TASC. The funding will also be used to acquire additional supplies such as the KidzMath program, which is highly popular and is used around the country to get students interested in math and to improve scores. Funding will also be used to secure additional bus transportation from the Office of Pupil Transportation as well as food and refreshments for the children.

Staffing and Facilities

The program will be housed in the school recreation room, so there will be no need to rent a facility. The program will be supervised by the TASC consultant, who will preferably be someone from the community who is familiar with the target population and can easily relate to their situation. The principal and assistant principal will take turns observing the classes and ensuring that the teachers and participants are abiding by the rules.

The teachers will be eighth grade math and science teachers who deal with the target population on a daily basis and who are familiar with the problems they are experiencing. Additionally, the teachers will be assisted by high school students who are well versed in the subject areas, who have been recruited from neighboring high schools and would like to add an after-school tutoring activity to their resume. Therefore, these students will not be paid but will use the after-school program as a learning experience.

Participants

The students participating in the program will be chosen based on their past academic performance in grades six and seven and their failure to show any signs of improvement. To reiterate, this program is geared specifically toward eighth grade students, ages 14-16, and will begin about a month into the academic year, in late September or early October, after the students and teachers have settled into the new semester.

Letters will be sent to the parents at the beginning of the academic year notifying them of their child's progress and advising them that the program is mandatory if their child is to improve and move on to high school (the letters will be followed up by phone calls). Although the school has no recourse if a student fails to attend even though the program has been marked mandatory, experience suggests that a purely voluntary program tends to attract students who do not really need it while failing to draw those who do.

The parents will be informed of the structure of the program and of the fact that transportation will be provided so that their children will be taken home safely after the program. In fact, parents who may not be working full time, or at all, will be encouraged to observe or participate in a parallel session that will help them understand what their child needs in order to improve. This session, which will last as long as the tutoring session, will most likely be conducted by the assistant principal or a math/science staff member and will give parents an opportunity to become truly acquainted with the activities being conducted.

This program may also benefit the parents themselves, as some of them lack the basic educational skills necessary to obtain a job.

Activities and Schedule

The after-school program we will establish will mirror successful programs in Arkansas; it will therefore entail classes of one and a half hours each day, Monday through Thursday, between 3:00 pm and 4:30 pm. Mondays and Wednesdays will be dedicated to math, and Tuesdays and Thursdays will be dedicated to science.

The sessions will be divided into 40-minute periods: the first period will be dedicated to the teacher illustrating the subject material, and the second period will be dedicated to the students participating in groups and working together to complete the work presented in the first period. The students will get a ten-minute break during which they will receive refreshments. In the Camden School District in Arkansas, school officials credited the success of the after-school programs to the schools' release from the "Adequate Yearly Progress" (AYP) status under the NCLB Act (Arkansas Advocates for Children & Families, 2006).

Throughout the course of the program, teachers will be encouraged not to use the same material or techniques they rely on daily. The teachers will be reminded that the program is geared toward individuals who have a negative attitude toward the subjects, which may result not only from a failure to comprehend the material but also from teachers' emphasis on traditional methods. Therefore, the program will forgo any emphasis on memorization, computation, and equations and will focus on modeling and real-world problem solving. Engaging in group work, especially in math, has proven successful and will be the focus of the program.

According to Jones, group work differs from cooperative learning in its lesser emphasis on the teacher as instructor and its greater dependence on students teaching other students. Moreover, cooperative learning procedures, which depend first on instruction by the teacher and then on practice engaged in actively by members of an established student team (often of four members), are supported by evidence of their efficacy in elevating not only achievement but also self-esteem, interpersonal effectiveness, and interracial harmony (Jones, 1988, 328).

Therefore, the students will work together in groups over the course of the academic year and will be exposed to hands-on experiences, games, and projects. KidzMath should serve as a good stimulus, and with the assistance of the teachers, the students should be motivated. Teachers will also be encouraged to maintain weekly progress reports, which will ultimately be used to assess the program's progress.

Another aspect of the program entails the principal establishing stronger ties with the community and getting more community leaders involved by having them drop by the after-school program to give advice and encouragement to the students. Students are stimulated not only by various activities that are outside the norm of the regular classroom but also by role models or individuals they deem to be successful from their part of the neighborhood.

According to a report on the Chicago School Reform, the schools that experienced major changes and improvements were led by principals who were strong veteran leaders with good relationships with their local school councils and the community (Hess, Jr., 1999, 79). Additionally, incentives, which can range from visits to museums to trips to amusement parks for students who have shown even slight improvement, can be offered to encourage continued participation and potential success.

While these children who performed poorly are from low-income families, and a reduction in poverty rates might have a salutary effect on measured school achievement, Lyle V. Jones argues that the influence of poverty on educational achievement may be ameliorated by introducing school-parent programs to improve academic conditions in the home. After reviewing nearly 3,000 investigations of productive factors in learning, Jones concludes that such programs have an outstanding record of success in promoting achievement (Jones, 1988, 327).

Explanation of Logic Model

Inputs: These consist of the fundamental resources—human and capital—that the program needs in order to achieve its goals. They include funding for per-session rates for the teachers, payment for the TASC consultant, supplies such as KidzMath, transportation, and refreshments. The most important resource, however, is the children to whom the program is directed.

Activities: Once the fundamental resources are in place, the schedule has been established, and the teaching techniques have been agreed upon, the after-school program will proceed as planned throughout the academic year.

The sessions will be conducted four days a week, Mondays and Wednesdays for math and Tuesdays and Thursdays for science, lasting 1.5 hours each. The sessions will entail a great deal of group work and collaboration, along with potential visits from community leaders and role models.

Outputs: Upon implementation of the program, it is important to ascertain whether the program is reaching its target population, whether the services provided are being delivered in the manner discussed, and whether the population is benefiting; any concerns will be noted throughout the assessment.

This will be done by conducting site visits, performing observations, and conducting surveys.

Outcomes: If the program is successful in achieving its goals, the immediate outcomes will be students passing their in-class tests and, ultimately, the state exams—which have been the focal issue for the school and the reason for its Title I status under the NCLB Act. The long-term outcomes include the participants of the program actually going on to high school and possibly even college. From that point onward, if students succeed in college, they may even pursue challenging careers, thereby improving their socio-economic status.

The reason the logic diagram is drawn in a cyclical manner is to demonstrate that if the program is successful and the students do improve significantly, then the school may be eligible for the same amount of funding, or a higher amount, which it can use to increase its resources for the input phase of the upcoming academic year.

PROGRAM PROCESS

Once the program has been implemented, it is important to ascertain whether the services are in fact being delivered as planned and whether the participants are learning as the teachers employ the new techniques discussed.

In order to do this, we will conduct an observational study fashioned after TASC's site-visit procedures, in addition to teacher and parental surveys, to see if they have noted any differences in the children participating in the program. This assessment will be done halfway through the semester, at approximately the end of January, which will also coincide with the first set of state exams (students also take these exams toward the end of the academic year, in approximately June).

The assessment will begin with a two-person team (my colleague and I) visiting the after-school program two days a week for a total of two weeks—one day for math and the other for science. The visit will include an interview with the principal and assistant principal (who, as noted before, will have taken turns monitoring the program). There will be 90-minute observations, including the 10-minute break, to see how the children are behaving and how the teachers interact with them. The assessment will look at three of the five primary factors fashioned after TASC's ratings of project activities:

Staff-directing relationship-building:
- Staff use positive behavior management techniques
- Staff show positive affect toward youth
- Staff attentively listen to and/or observe youth
- Staff encourage youth to share their ideas, opinions, and concerns

Staff strategies for skill-building and mastery:
- Staff verbally recognize youth's efforts and accomplishments
- Staff assist youth without taking control
- Staff ask youth to expand upon their answers and ideas
- Staff challenge youth to move beyond their current level of competency
- Staff plan for/ask youth to work together
- Staff employ two or more teaching strategies

Activity content and structure:
- The activity is well organized
- The activity challenges students intellectually, creatively, and/or physically
- The activity requires analytic thinking

The observers will rate each indicator on a scale from 1 to 5, where 1 means that the indicator was not evident during the observation period and 5 means that the indicator was highly evident and consistent. These ratings will provide a systematic method for the observation team to quantify its observations of the factors that contribute to the possible success of the program (TASC Catalog of Publication and Reports, 2005, 3); a sketch of how such ratings might be summarized follows.
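To make the rating procedure concrete, here is a minimal sketch, with invented ratings and shortened indicator labels, of how the observation team's 1-to-5 scores might be recorded and averaged by factor. The dictionary keys are paraphrases of the TASC-style indicators above, not an official coding scheme.

```python
# Hypothetical sketch: record each indicator's 1-5 rating and summarize by factor.
# All ratings below are invented examples; indicator labels are shortened paraphrases.
from statistics import mean

ratings = {
    "Staff-directing relationship-building": {
        "positive behavior management": 4,
        "positive affect toward youth": 5,
        "attentive listening/observation": 3,
        "encourages sharing of ideas": 4,
    },
    "Skill-building and mastery": {
        "recognizes efforts and accomplishments": 4,
        "assists without taking control": 3,
        "asks youth to expand on answers": 4,
        "challenges beyond current competency": 3,
        "plans for youth to work together": 5,
        "uses two or more teaching strategies": 4,
    },
    "Activity content and structure": {
        "activity is well organized": 5,
        "challenges students intellectually/creatively/physically": 4,
        "requires analytic thinking": 3,
    },
}

# Average the indicator ratings within each factor to give the observation
# team a single summary score per factor per visit.
for factor, indicators in ratings.items():
    print(f"{factor}: mean rating = {mean(indicators.values()):.2f}")
```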

The assessment will also ask teachers to document any changes they have observed in their students' behavior throughout the program. This will be extracted from the weekly progress reports that they were encouraged to keep at the commencement of the program. This will give us an idea of whether the students have made any progress in the eyes of the educators. The last assessment will be done with the parents, who will be asked their views of the program. The questions will include, but not be limited to:
- Is the program meeting your expectations?
- Do you see any noticeable changes in your child's progress? Does your child show any more interest in math or science?
- Do you feel you have benefited from observing or partaking in the informative sessions conducted by the principal or staff?
- Are you satisfied with the transportation provided?
These questions will receive ratings from 1 to 5, as noted above, and will give us an overall picture of the program's process and structure. We can use the results of the assessment to make mid-term improvements before the conclusion of the program. The results can also be used for future improvements should the after-school program enter a second academic year.

CONCEPTUALIZATION AND OPERATIONALIZATION OF PROGRAM OUTCOMES

The goal of this study is to determine the impact of an after-school program on improving the scores of low-performing eighth grade students at IS 166. Therefore, the hypothesis is that eighth grade math and science students who have performed below average on state exams are likely to improve their grades in both subjects after enrolling in and completing the year-long after-school program. In this case, the independent variable is the after-school program and the dependent variable is the overall improvement in grades.

Independent Variable: After-school Program

The after-school program (in this project) may be conceptualized as any academic activity that takes place outside of the mandated school hours and that is geared toward the improvement of a child's academic achievement in a specific subject area. It may be operationalized by examining the responses from the observations conducted in the assessment phase, which were based on five primary factors ranging from staff-directing relationship building to staff strategies for skill-building and mastery to activity content and structure.

Under each category there are various indicators, each rated on a scale from 1 to 5, where 1 means that the indicator was not evident during the observation period and 5 means that the indicator was highly evident and consistent.

Dependent Variable: Overall Improvement in Grades

Overall improvement in grades may be conceptualized as a notable or significant increase, anywhere from 15 to 20 percent, in the in-class and state scores. The increase in scores would, ideally, translate into passing grades.

Improvement in grades can be operationalized by examining both the in-class and state test scores and comparing them to the previous year's scores; on that basis, we can begin to measure success by the increase in scores. It should be noted that while the overall improvement in grades is the primary dependent variable on which the focus is placed, there are other variables that should be taken into account; due to the constraints of this paper, they will be mentioned only briefly.

These include, but are not limited to, improvement in student attitudes—that is, the effect the after-school program has had on students' approach to the subjects. Do the students now have a positive attitude toward the subjects after improving their ability to process and analyze the new information provided? There is also the parental-support aspect, which must be taken into consideration. Did the after-school program increase parental awareness—that is, make parents aware of what students need to excel in both subjects? Do parents now know how to assist or provide support for their children in these subject areas?

Assessing Program Impact—Strategy

In order to determine whether the after-school program had an effect on overall math and science scores, a randomized control-group pretest and posttest design will be conducted. (Please note that steps 1-3 would have been completed prior to the implementation of the after-school program.) The following steps will be followed in order to execute this test:

1) Students will be selected from the eighth grade roster by random methods, specifically by randomly choosing social security numbers from the database.
2) Students with social security numbers ending in even numbers will be assigned to the treatment group (X)—the after-school program, while students with social security numbers ending in odd numbers will be placed in the nontreatment group (Y).
3) An in-class test similar to that given at the state level will be administered to both groups to ascertain their scores—the dependent variable. The scores will be totaled for both the experimental and control groups.
4) After totaling the scores, the experimental phase will begin.

Both groups will be exposed to the same conditions, with the exception that the experimental group (X) will receive the experimental treatment, the after-school program, for the academic year.
5) After the experimental group has completed the after-school program, both groups will be evaluated again using an in-class test similar to the one given in the pre-testing period. Once again, the scores will be totaled for both the experimental and control groups.
6) The scores from the pre-testing period and the post-testing period will be compared to establish the difference.
7) The differences in the scores will be compared to determine whether the after-school program (the treatment) was associated with a change favoring the experimental group over the control group, which did not participate in the after-school program.
8) A statistical test will be used to determine whether the difference in the scores is truly significant—that is, whether the difference is large enough to reject the null hypothesis that it is simply a chance occurrence (a sketch of such a test appears below).

According to Stephen Isaac in his book Handbook in Research and Evaluation, internal validity gains strength with the randomized design because extraneous variables are controlled, since they affect both groups equally (Isaac, 1971, 39). To elaborate, an extraneous variable such as differential selection is controlled by random selection methods. Maturation and pre-testing effects occur equally for all groups, differential mortality can be assessed for nonrandom patterns, and statistical regression is controlled when extreme scorers from the same population are randomly assigned to groups (statistical regression will occur, but it will occur equally across all groups) (Isaac, 1971, 39).
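As a concrete illustration of steps 2 and 8, the following is a minimal sketch, using entirely invented scores and group sizes, of how gain scores for the treatment and control groups might be compared with an independent-samples t-test. The group size of 30, the score distributions, and the .05 threshold are illustrative assumptions, not figures from IS 166.

```python
# Hypothetical sketch of the pretest/posttest comparison in step 8.
# All data are simulated for illustration; no real student scores are used.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30  # assumed number of students per group

# Simulated pretest scores (percent correct) and posttest gains for each group.
pre_treat = rng.normal(55, 10, n)
post_treat = pre_treat + rng.normal(12, 8, n)   # assumed larger gain for the program group
pre_ctrl = rng.normal(55, 10, n)
post_ctrl = pre_ctrl + rng.normal(3, 8, n)      # assumed smaller gain for the control group

gain_treat = post_treat - pre_treat
gain_ctrl = post_ctrl - pre_ctrl

# Independent-samples t-test on the gain scores (step 8).
t_stat, p_value = stats.ttest_ind(gain_treat, gain_ctrl)
print(f"mean gain, treatment: {gain_treat.mean():.1f}")
print(f"mean gain, control:   {gain_ctrl.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Reject the null hypothesis of no difference if p falls below the chosen .05 level.
```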

The disadvantages of this design lie in the within-session variations that arise while the experimental and control groups are tested and treated separately. There may be differences in room conditions, personalities of teachers, or wording of instructions. According to Isaac, the students should be tested individually or in small groups, randomly assigning subjects, times, and places to experimental and control conditions. The effects of any unwanted situational factors are thus randomly distributed among the subgroups, allowing them to be ignored (Isaac, 1971, 39).

Isaac further states that to control for within-session instrument differences, it is necessary also to assign mechanical instruments, teachers, observers, and raters to sessions—or preferably to a single session. Ideally, if observers or judges are involved, they should remain unaware of which groups are being used for control or experimental purposes, since they may have subtle biases that could influence their observations.

REFERENCES

Andreatta, Dave. "Math Concerns Are Adding Up" New York Post, October 12, 2006: 11.

Arkansas Advocates for Children & Families (2006). After-school programs in Arkansas: A solution whose time has come. Little Rock, AR: author. Accessed on 10/29/2006. http://www.arkleg.state.ar.us/data/education/
Birmingham, Jennifer, Pechman, Ellen M., Russell, Christina A., and Monica Mielke. "Shared Features of High-Performing After-School Programs: A Follow-up to the TASC Evaluation" TASC Catalog of Publications and Reports, November 2005. Accessed on 11/2/2006.
Domanico, Raymond. "State of the NYC Public Schools 2002" Civic Report, Manhattan Institute for Policy Research. March 2002, #26. Accessed on 10/16/2006.
Elias, Maurice. "Middle School Transition: It's Harder Than You Think-Making The Transition to Middle School Successful" Middle Matters, Winter 2001: 1-2. Accessed on 10/19/2006.
Foster, Michele, Lewis, Jeffrey, and Laura Onafowora. "Grooming Great Urban Teachers" Educational Leadership, March 2005, (62) 6: 28-32.
Good, Thomas L., Legg Burross, Heidi, and Mary M. McCaslin. "Comprehensive School Reform: A Longitudinal Study of School Improvement in One State" Teachers College Record, October 2005, (107) 10: 2205-2226.
Hess, Jr., G. Alfred. "Understanding Achievement (and other) Changes under Chicago School Reform" Educational Evaluation and Policy Analysis, Spring 1999, (21) 1: 67-83.
Isaac, Stephen (1971). Handbook in Research and Evaluation. San Diego: EDITS Publishers.
Jones, Lyle V. "Schooling in Mathematics and Science and What Can Be Done to Improve Them" Review of Research in Education, 1988-1989, (15): 307-341.
Manswell Butty, Jo-Anne L. "Teacher Instruction, Student Attitudes and Mathematics Performance among 10th and 12th Grade Black and Hispanic Students" The Journal of Negro Education, Winter-Spring 2001, (70): 19-37.
New York City Department of Education 2004-2005 Annual School Reports (Provided by the Division of Assessment and Accountability). Accessed on 10/14/2006.
Simon, Martin A., and Deborah Schifter. "Toward a Constructivist Perspective: The Impact of a Mathematics Teacher Inservice Program on Students" Educational Studies in Mathematics, December 1993, (25) 4: 331-340.


Quantitative business problems

One of the most popular ways to solve quantitative business problems is through systems of equations. However, not every system yields exactly one solution: some systems have a unique solution, some have no solution, and others have an infinite number of solutions. Furthermore, there are a number of ways to solve such systems, including substitution, elimination, and elimination using Gaussian (matrix) methods.

The challenges encountered were mainly in examples two and three, where the answers could not be obtained even after tedious calculations. There are particular situations that produce each of these outcomes. If two equations are logically inconsistent (x + y cannot equal both 3 and 4), the system has no solution. When this occurs, it means that the basic assumptions of the business need to be revisited, or that there may be some fault in translating the business problem into equations.

There are infinitely many solutions when two equations say the same thing. In that case, one needs to include some additional business constraint to obtain a unique answer. The sketch below illustrates how these three cases can be detected.
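As an illustrative sketch only, the snippet below shows one way to distinguish the three outcomes by comparing the rank of the coefficient matrix with the rank of the augmented matrix; the example equations, including the x + y = 3 versus x + y = 4 contradiction mentioned above, are toy systems rather than an actual business model.

```python
# Minimal sketch: classify a linear system as having a unique solution,
# no solution, or infinitely many solutions, then solve the unique case.
import numpy as np

def classify_and_solve(A, b):
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    rank_A = np.linalg.matrix_rank(A)
    rank_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rank_A < rank_aug:
        return "no solution (inconsistent equations)", None
    if rank_A < A.shape[1]:
        return "infinitely many solutions (redundant equations)", None
    return "unique solution", np.linalg.solve(A, b)

# x + y = 3, x - y = 1   -> unique solution (x = 2, y = 1)
print(classify_and_solve([[1, 1], [1, -1]], [3, 1]))
# x + y = 3, x + y = 4   -> no solution (the contradiction noted above)
print(classify_and_solve([[1, 1], [1, 1]], [3, 4]))
# x + y = 3, 2x + 2y = 6 -> infinitely many solutions (same equation twice)
print(classify_and_solve([[1, 1], [2, 2]], [3, 6]))
```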



The Illusion of Transparency in Negotiations

Table of contents

The authors examined whether negotiators are prone to an “illusion of transparency,” or the belief that their private thoughts and feelings are more discernible to their negotiation partners than they actually are. In Study One, negotiators who were trying to conceal their preferences thought that their preferences had “leaked out” more than they actually did.

In Study Two, experienced negotiators who were trying to convey information about some of their preferences overestimated their partners' ability to discern them. The results of Study Three rule out the possibility that the findings are simply the result of the curse of knowledge, or the projection of one's own knowledge onto others. The discussion explores how the illusion of transparency might impede negotiators' success. In most cartoon depictions of negotiators in action (a tiny fraction of the cartoon universe, we admit), negotiators are shown with dialog bubbles depicting their overt comments and thought bubbles revealing their private thoughts. These conventions convey the different levels at which negotiators operate: Some of their wants, wishes, and worries are conveyed to the other side, but some are held back for strategic advantage. One task in negotiation is deciding how much information to hold back (Raiffa 1982).

Do negotiators know how well they have conveyed or concealed their preferences? Typically, negotiators know what they have and have not said, of course, so they may generally have a good idea what their partners know about their preferences. But how well calibrated are negotiators’ assessments of what they have conveyed and concealed? We explored one source of potential miscalibration, namely, whether negotiators experience an illusion of transparency, overestimating the extent to which their internal states “leak out” and are known by others (Gilovich, Savitsky, and Medvec 1998). Most research on the illusion of transparency shows that people overestimate their ability to conceal private information. But there is also evidence that people experience the illusion when trying to convey private information.

Individuals who were asked to convey emotions with facial expressions alone overestimated observers’ ability to discern the expressed emotion (Savitsky 1997). Likewise, participants who were videotaped while exposed to humorous material thought they had been more expressive than observers subsequently rated them as being (Barr and Kleck 1995). These findings suggest that, when trying either to conceal or convey information, negotiators may experience an illusion of transparency, overestimating what their partners know about their preferences.

Whether they do so is important, because previous research has shown that the likelihood of (optimal) settlement is often contingent on accurate perceptions of what others know about one’s own preferences (Bazerman and Neale 1992; Raiffa 1982; Thompson 1991). We conducted three different studies to examine whether negotiators experience an illusion of transparency in negotiations. Studies One and Three examined whether novice negotiators trying to conceal their preferences tend to overestimate the likelihood that their negotiation partners would be able to identify those preferences.

Study Two investigated whether experienced negotiators attempting to communicate some of their preferences also succumb to an illusion of transparency. Study Three was also designed to distinguish the illusion of transparency from the “curse of knowledge,” or the tendency to project one’s knowledge onto others (Camerer, Loewenstein, and Weber 1989; Keysar and Bly 1995; Keysar, Ginzel, and Bazerman 1995). Specifically, we examined whether observers who are “cursed” with the same knowledge as the negotiators exhibit the same biases as the negotiators themselves.

Study One

Method

Twenty-four previously unacquainted Cornell University undergraduates participated in pairs in exchange for course credit. Participants learned that they would complete a negotiation exercise in which they would each represent the provost at one of two campuses of a multi-campus university system. Because of budget constraints, all of the system's eight social psychologists needed to be consolidated at the two provosts' universities.

The provosts were to negotiate the distribution of the social psychologists between the two campuses. Participants were informed that some social psychologists were more valuable than others, and that some were more valuable to one campus than the other. These differences were summarized in a report describing the strengths and weaknesses of each psychologist and assigning each a specific number of points. The eight psychologists were among the fifteen most frequently cited in social psychology textbooks (Gordon and Vicarii 1992).

To familiarize participants with the psychologists and their expertise, each psychologist was depicted on a 2- by 4-inch laminated "trading card" that displayed a picture of the social psychologist, his or her name, and two of his or her better-known publications. Each negotiator's most and least valuable psychologists were assigned +5 and –5 points, respectively, and the other psychologists were assigned intermediate values. The experimenter said that all psychologists must be employed at one of the two universities because all were tenured.

The most and least valuable psychologists were not the same for the two negotiators; the correlation between how much each of the eight psychologists was worth to the two participants was .79. Participants were told that they should conceal their report, which was somewhat different from the other participant's report. Because pilot testing indicated that many participants were unsure how to negotiate, we showed them a five-minute videotape of a staged negotiation in which two confederates bartered over who would get (or be forced to acquire) each psychologist.

The confederates were shown actively trading cards back and forth. Participants were given as much time as they needed to negotiate, usually about 30 minutes. They were told that several prizes would be awarded at the end of the academic term (e.g., a $50 gift certificate to the Cornell bookstore, dinner for two at a local restaurant) and that their chance of winning a prize corresponded to the number of points they earned in the negotiation. We asked participants both early in the negotiation (after approximately five minutes) and at the end to name their partner's most valuable and least valuable psychologists.

At both times, we also asked them to estimate the likelihood (expressed as a percentage) that their partner would correctly identify their most and least valuable psychologists. We pointed out that the probability of correct identification by chance alone was 12.5 percent. Question order was counterbalanced, with no effect of order in any of our analyses.

Results and Discussion

Our key analysis was a comparison of participants' mean estimates to a null value derived from the overall accuracy rate.

Participants can be said to exhibit an illusion of transparency if their estimates, on average, are higher than the actual accuracy rate. As predicted, negotiators overestimated their partners’ ability to detect their preferences, but only after the negotiation was complete (see Table One). Early in the negotiation, individuals slightly underestimated (by 2 percent) the likelihood that their partners would correctly identify their most valuable psychologist and slightly overestimated (by 8 percent) the likelihood that their partners would identify their least valuable psychologist.

Neither of these differences was statistically reliable. Following the negotiation, participants overestimated the probability that their partners would correctly identify their most and least valuable psychologists by 14 percent and 13 percent, respectively. Both of these differences were statistically reliable. That is, the probability that negotiators overestimated by pure chance how much their partners knew about their preferences is less than .05 (the t statistics for these two comparisons are 3.16 and 3.30, respectively). Negotiators thus experienced an illusion of transparency at the end of the negotiation, overestimating their partners' ability to discern their preferences.

These findings extend earlier research on the illusion of transparency, showing that negotiators believe their inner thoughts and preferences "leak out" and are more discernible than they really are. This result was obtained only during the second assessment, but we do not wish to make too much of this finding. First, it is hardly surprising because, at the time of the initial assessment, most groups had yet to engage in much discussion of specific candidates, and thus there was little opportunity for participants' preferences to have leaked out. Furthermore, it was only participants' estimates of the detectability of their least valuable psychologists that rose predictably (from 58 to 76 percent) from early in the negotiation to the end — an increase that was highly statistically reliable (t = 3.78). Their estimates of the detectability of their most valuable psychologists stayed largely the same across the course of the negotiation (from 69 to 72 percent), and it was only a decrease in identification accuracy (from 71 to 58 percent) over time that led to the difference in the magnitude of the illusion of transparency.

These subsidiary findings may result from the usual dynamics of the negotiation process: Negotiators typically focus initially on the most important issues, postponing a discussion of less important issues or of what they are willing to give up to obtain what they want until later in the negotiation. This would explain why negotiators felt that they had already leaked information about their most important psychologists early in the negotiation, but that a similar feeling of leakage regarding their least important psychologists took longer to develop.

This tendency might also explain why it may have been relatively easy for the negotiators to discern one another's "top choices" early in the discussion. It may have been harder to do so later on, after the negotiators had discussed all of the psychologists and the various tradeoffs between them.

Study Two

In Study One, participants experienced an illusion of transparency when they were instructed to conceal their preferences from their partners. In many negotiations outside the laboratory, however, negotiators often attempt to communicate rather than conceal their preferences.

In fact, negotiation instructors often advise MBAs and other would-be negotiators to communicate information about their preferences. Do negotiators experience an illusion of transparency when they attempt to communicate rather than conceal their preferences? Past research has shown that people experience an illusion of transparency when trying (nonverbally) to convey thoughts and feelings in settings outside negotiations (Barr and Kleck 1995; Savitsky 1997).

We therefore examined whether negotiators attempting to communicate some of their preferences, whose efforts at communication are not limited to nonverbal channels, would likewise experience an illusion of transparency. As part of a classroom exercise, MBA students in negotiation courses completed a complex six-party negotiation simulation (Harborco, a teaching tool available from the Clearinghouse of the Program on Negotiation at Harvard Law School, www.pon.org). The course emphasized the importance of negotiators communicating some of their preferences to one another in negotiations.

Prior to the Harborco negotiation, students had engaged in numerous other exercises in which their failure to convey information resulted in nonoptimal settlements. To verify that the Harborco negotiators were attempting to communicate information about their preferences, we asked 22 Cornell and Northwestern University MBA students (not included in the following study) who had just completed the Harborco negotiation to indicate which strategy they engaged in more: an information-sharing strategy (attempting to communicate their preferences to others) or an information-hiding strategy (attempting to conceal their preferences from others).

Everyone indicated that they used the information-sharing strategy more. We hypothesized that the same psychological processes that lead novice negotiators trying to conceal their preferences to experience an illusion of transparency would also lead experienced negotiators trying to communicate at least some of their preferences to experience a similar illusion. We thus predicted that participants would overestimate the number of other negotiators who could correctly identify their preferences.

Method

Two hundred and forty MBA students at Cornell and Northwestern completed the Harborco simulation, negotiating whether, and under what circumstances, a major new seaport would be built off the coast of a fictional city. There were six parties to the negotiation. The negotiator who represented Harborco (a consortium of investors) was most central. A second negotiator, representing the federal agency that oversees the development of such seaports, had to decide whether to subsidize a $3 billion loan Harborco had requested.

The other negotiators represented the state governor, the labor unions from surrounding seaports, the owners of other ports that might be affected by a new seaport, and environmentalists concerned about the impact of a new seaport on the local ecology. The negotiation involved five issues, each with several options of varying importance to the six parties. For each negotiator, points were assigned to each option of each issue. Student performance was evaluated according to the number of points accumulated.

For example, the most important issue to the Harborco representative was the approval of the subsidized loan (worth 35 points for approval of the full $3 billion, 29 points for approval of a $2 billion loan, etc.); the second most important issue was the compensation to other ports for their expected losses due to the new seaport (worth 23 points for no compensation, 15 points for compensation of $150 million, etc.).

The Harborco negotiator’s preference order for the five issues was somewhat different from the preference order of the other five negotiators. Participants were given approximately one and a half hours to reach an agreement. They were required to vote on a settlement proposed by the Harborco negotiator at three points during the negotiation: after 20 minutes, after one hour, and at the end. A successful agreement required the approval of at least five negotiators. Any agreement that included the subsidized loan required the approval of the federal agency representative.

The Harborco negotiator could veto any proposal. The dependent measures, collected after the first and final rounds of voting, concerned the Harborco negotiator's estimates of the other negotiators' identification of his or her preference order. The Harborco negotiators estimated how many of the other five negotiators would identify the rank ordering (to the Harborco negotiator) of each issue — for example, how many would identify the approval of the loan as their most important issue? We made clear that, by chance alone, one of the other negotiators would be expected to guess the exact importance of each issue.

Meanwhile, each of the other negotiators estimated which issue was most important to Harborco, which was second most important, and so on.

(Table: Predicted and actual number of negotiators able to correctly identify the importance of each issue to the Harborco negotiator after the first and final rounds of voting.)

Results and Discussion

Following the first round of voting, the Harborco negotiators overestimated the number of their fellow negotiators able to identify the importance — to them — of all mid-range issues. All of these differences were statistically reliable (all ts > 2.0). Negotiators did not overestimate the number of negotiators able to identify their most and least important issues. Following the final round of voting, Harborco representatives overestimated the number of negotiators able to identify their four most important issues. This overestimation was statistically reliable for the four most important issues (all ts > 2.25) and was marginally reliable, with a probability level of .14, for the least important issue (t = 1.5). These findings replicate and extend those of Study One and of previous research on the illusion of transparency.

Experienced negotiators who were attempting to convey (rather than conceal) their preferences to other negotiators tended to overestimate the transparency of those preferences.

Study Three

We contend that negotiators' overestimation of their partner's ability to discern their preferences reflects an egocentric illusion whereby negotiators overestimate the transparency of their internal states. An alternative account is that negotiators experience a "curse of knowledge," overestimating the knowability of whatever they themselves know. Negotiators may thus overestimate the discernibility of their preferences because they cannot undo the knowledge of their own preferences, not because they feel like their preferences "leaked out." Studies One and Two provide some evidence against this alternative interpretation because participants did not significantly overestimate their partners' ability to discern their preferences early in the negotiation — when they were "cursed" with the same knowledge, but had little opportunity for their preferences to leak out.

To provide a more rigorous test of this alternative interpretation, Study Three employed a paradigm in which observers were yoked to each individual negotiator. The observers were informed of their counterpart’s preferences and thus were “cursed” with the same abstract knowledge, but not with the phenomenology of having — and possibly leaking — the negotiators’ preferences. After watching a videotaped negotiation between their yoked counterpart and another negotiator, observers estimated the likelihood that their counterpart’s negotiation partner would identify their counterpart’s preferences.

We expected that observers’ estimates would be lower than actual negotiators’ estimates because observers would not have the experience of their preferences “leaking out. ” The Illusion of Transparency in Negotiations Method Twenty-four previously unacquainted Northwestern University undergraduates participated in pairs in exchange for the opportunity to earn between $4 and $13, based on their performance in the negotiation. Negotiators were taken to separate rooms and given instructions for the negotiation.

The negotiation was similar to that used in Study One, except that it involved a buyer-seller framework, with which we felt our participants would be familiar. Participants learned that they would act as a provost of one of two campuses of a large university system. Because of budget cuts, the larger of the two campuses (the “seller”) needed to eliminate fifteen of its 35 psychology department faculty. Because the fifteen faculty were tenured, they could not be fired, but they could be transferred to the smaller of the two campuses (the “buyer”), which was trying to acquire faculty.

Participants were to negotiate over the fifteen psychologists “in play”; any faculty not acquired by the buyer would remain at the seller’s campus. Participants were given a report that described each psychologist and his or her associated point value. Some of the psychologists had a positive value to buyers and a negative value to sellers, others had a positive value to both, and still others had a negative value to both. Participants were told that they should not show their confidential reports to the other negotiator.

Participants earned 25 cents for every positive point and had to pay 25 cents for every negative point they accumulated. To give buyers and sellers an equal chance to make the same amount of money, we endowed sellers with an initial stake of $10 and buyers with an initial stake of $4. If buyers obtained all nine of the beneficial faculty and none of the four costly faculty (two were worth 0 points) they earned an additional $8, for $12 total. Similarly, if the sellers eliminated all eight costly faculty and retained all five beneficial faculty (two were worth 0 points) they earned $2, for $12 total.

If no agreement was reached, sellers retained all faculty, losing $6, and buyers acquired no psychologists, leaving both with $4. As in Study One, we gave participants laminated trading cards with a picture of each psychologist and two of that psychologist's better-known works on the back. The fifteen faculty members, although in reality all social psychologists, were arbitrarily divided into the three subdisciplines of social, clinical, and human-experimental psychology. We designed the payoffs so that the psychologist within each discipline whom the buyer most wanted to obtain was not the psychologist the seller most wanted to eliminate. To encourage participants to obtain or retain psychologists across the three disciplines, sellers were offered an additional two points if they eliminated at least one faculty member from each discipline, and an additional four points if they eliminated at least two from each discipline. Similarly, buyers were offered an additional two points if they acquired at least one faculty member from each discipline, and an additional four points if they acquired at least two from each discipline. Thus, maximum earnings for buyers and sellers were $13 (the $12 earned by accumulating all possible positive points and no negative points, plus the $1 bonus). After negotiators understood their task, they were brought together and given as long as they needed to negotiate a division of the fifteen psychologists, usually about 20 minutes.

Afterward, buyers estimated the likelihood (expressed as a percentage) that the seller would correctly identify the psychologists from each subdiscipline who were the most and least important for the buyer to obtain; sellers estimated the likelihood that the buyer would correctly identify the psychologists from each subdiscipline who were the most and least important for the seller to eliminate. Participants were told that the chance accuracy rate was 20 percent. Buyers were also asked to identify the psychologists from each subdiscipline who were the most and least important for the seller to eliminate, and sellers were asked to make analogous judgments about the buyers' incentive structure. Twelve pairs of previously unacquainted Northwestern undergraduates were paid $6 and "yoked" to one of the 12 pairs from the negotiation condition — one student matched to the buyer and one to the seller. Participants read the instructions given to their yoked counterpart (either the buyer or seller) in the actual negotiation before viewing their counterpart's videotaped negotiation.

Participants then made the same estimates as their counterparts in the negotiation condition, identifying the psychologists from each subdiscipline who were most and least important for their counterpart's negotiation partner to acquire (or eliminate), and estimating the likelihood that their counterpart's negotiation partner would be able to guess the psychologists in each subdiscipline who were most and least important for their counterpart to obtain (or eliminate).

Results

Negotiators. As anticipated, negotiators exhibited an illusion of transparency.

As can be seen in the left and right columns of Table Two, buyers and sellers overestimated their partners' ability to identify their most important psychologists by 20 percent — both statistically reliable differences (ts = 3.58 and 3.45, respectively). Buyers and sellers also overestimated the likelihood that their partner would be able to identify their least important psychologists by 4 percent and 25 percent, respectively, with only the latter result statistically reliable (t = 4.34).

Control participants. Control participants displayed a "curse of knowledge," overestimating the likelihood that their counterpart's negotiation partner would correctly identify their counterpart's preferences (compare the center and right columns of Table Two). This was particularly true for those yoked to sellers: They reliably overestimated the likelihood that their yoked counterparts' negotiation partners would identify their counterparts' most and least important psychologists by 12 percent and 19 percent, respectively (ts = 2.58 and 4.9). Control participants who were yoked to buyers, in contrast, did not overestimate the likelihood that their yoked counterparts' negotiation partners would identify their counterparts' preferences.

(Table Two note: estimates marked in the table are reliably greater than the corresponding actual percentages, p < .05.) More important, in every case the control participants' estimates (overall M = 56 percent) were lower than the actual negotiators' estimates (overall M = 64 percent) — a statistically reliable difference (t = 2.53). Thus, negotiators overestimated the transparency of their preferences more than yoked observers who were "cursed" with the same knowledge but did not have the same subjective experience as the negotiators themselves.

Discussion

The results of Study Three indicate that negotiators' overestimation of their partners' ability to discern their preferences stems from both a curse of knowledge and an illusion of transparency. Observers who were provided with the same abstract knowledge as the negotiators — those provided with abstract information about sellers' preferences, at any rate — overestimated the likelihood that those preferences would be detected. However, this effect was not as strong as that found for actual negotiators' estimates.

Those participants, possessing more detailed knowledge about how it felt to want to obtain some psychologists and avoid others, apparently thought that some of those feelings had leaked out to their partners, because they made significantly higher estimates of the likelihood of detection than the observers did. Negotiators experience an illusion of transparency over and above any curse of knowledge to which they are subject.

What Does It All Mean?

These three studies provide consistent support for an illusion of transparency in negotiations.

Undergraduate students who were instructed to conceal their preferences thought that they had "tipped their hand" more than they actually had (Studies One and Three). Likewise, business students experienced in negotiation who were attempting to communicate information about some of their preferences overestimated how successfully they had done so (Study Two). These results are not due to an abstract "curse of knowledge," because observers who were cursed with the same knowledge as the negotiators did not overestimate the detectability of the negotiators' preferences to the same extent as the negotiators did (Study Three).

The illusion of transparency is thus due to the sense that one's specific actions and reactions that arise in the give-and-take of negotiation — a blush here, an averted gaze there — are more telling than they actually are. These results complement and extend findings by Vorauer and Claude (1998), who examined participants' ability to estimate how well others could discern their general approach to a joint problem-solving exercise — i.e., whether they were most interested in being assertive, being fair, being accommodating, and so on.

They found that participants thought their goals would be more readily discerned than they actually were. Their findings, however, appear to reflect a curse of knowledge rather than an illusion of transparency, because their participants' estimates of the detectability of their own goals were just the same as those made by observers who were simply informed of the participants' goals. The Vorauer and Claude findings should not be surprising, since their participants did not actually engage in face-to-face interaction.

Instead, each participant exchanged notes with a "phantom" other, whose responses were crafted by the experimenters. Without interaction, it is difficult to see how an illusory sense of transparency could emerge. Vorauer and Claude's studies, along with the results of Study Three, suggest that the curse of knowledge can likewise lead to exaggerated estimates of how readily one's negotiation partner can discern one's own perspective on the negotiation.

It is important to note that both the illusion of transparency and the curse of knowledge reflect people’s difficulty in getting beyond their privileged information. In the curse of knowledge, this information is abstract knowledge of one’s beliefs, preferences, or goals; in the illusion of transparency, this information is more detailed, phenomenological knowledge of how one feels or how difficult it was to suppress a particular reaction. At one level, then, it may be fair to characterize the illusion of transparency as a special case of knowledge — more detailed and affect-laden — with which one is cursed.

At another level, however, the differences between the two phenomena may be sufficiently pronounced that there is more to be gained by viewing them as distinct. Ultimately, a more complete understanding of the relationship between the curse of knowledge and illusion of transparency must await the outcome of further research. Future research might also further examine the underlying mechanism proposed for the illusion of transparency. Gilovich et al. (1998) attribute the phenomenon to a process much like Tversky and Kahneman’s (1974) anchoring and adjustment heuristic.

When attempting to ascertain how apparent their internal states are to others, people are likely to begin the process of judgment from their own subjective experience. Because people know that others are not as privy to their internal states as they are themselves, they adjust from their own perspective to capture others’ perspective. Because such adjustments tend to be insufficient (Tversky and Kahneman 1974; Epley and Gilovich 2001), the net result is a residual effect of one’s own phenomenology, and the feeling that one is more transparent than is actually the case.

This account suggests that the illusion of transparency should be particularly pronounced when the internal state being assessed is one that is strongly and clearly felt, such as when negotiating especially important issues. In addition, future research might examine the impact of the illusion of transparency on negotiation processes and outcomes. Thompson (1991) has shown that when negotiators have different priorities, negotiators who provide information about their priorities to their partners fare better than those who do not.

The illusion of transparency may lead negotiators to hold back information about their priorities in the mistaken belief that they have conveyed too much information already. By leading negotiators to believe that their own preferences are more apparent than they really are, the illusion of transparency may give rise to the belief that the other side is being less open and cooperative than they are themselves — which may lead each negotiator to hold back even more. The process can thus spiral in the wrong direction toward greater secrecy.

It may be advantageous, then, for negotiators to be aware of the illusion of transparency. If negotiators know they tend to conceal less than they think they do, they may open up a bit more and increase their chances of reaching optimal agreements. In other words, knowing that one’s own “thought bubbles” are invisible to others can lead to more successful negotiations.

Notes

Because the data for each pair of negotiators are interdependent, all analyses in this and subsequent studies used the dyad (or group) as the unit of analysis.

A t statistic is a measure of how extreme a statistical estimate is. Specifically, a t is the ratio of the difference between a hypothesized value and an observed value, divided by the standard error of the sampled distribution. Consider negotiators' estimates, following the negotiation, that their negotiation partner had a 72 percent chance of correctly identifying their most valuable psychologist. Because, in actuality, negotiators identified their partners' most valuable psychologist only 58 percent of the time, the difference between the hypothesized value (58 percent) and the observed value (72 percent) is 14 percent. The standard error, in this case, is the standard deviation of the difference between a negotiator's predicted likelihood and the actual likelihood (the square root of the average squared difference between these two scores), divided by the square root of the sample size. In general, t statistics more extreme than 1.96 are statistically reliable — that is, the probability that the observed difference is due to chance alone is less than .05. A worked sketch of this calculation appears below.

We also asked negotiators to estimate which subdiscipline was most important to their partner, and to estimate the likelihood that their partner would correctly discern their own preference order vis-a-vis the three subdisciplines. During debriefing, however, participants said they found these questions confusing because they did not parse the 15 faculty according to their subdiscipline, but instead focused on the value of each individual faculty member. These responses are therefore not discussed further.
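For readers who want to see the arithmetic in the note above laid out, here is a minimal sketch of the described calculation: the mean difference between predicted and actual detection rates divided by its standard error. The twelve per-dyad values are invented solely to illustrate the formula and are not the study's data.

```python
# Hypothetical sketch of the t statistic described above.
# The per-dyad percentages below are invented for illustration only.
import numpy as np
from scipy import stats

predicted = np.array([80, 70, 75, 60, 85, 65, 70, 75, 80, 60, 70, 75])  # estimated % chance
actual    = np.array([60, 55, 70, 50, 65, 60, 55, 60, 65, 50, 55, 60])  # actual % identified

diff = predicted - actual                      # per-dyad overestimate
se = diff.std(ddof=1) / np.sqrt(len(diff))     # standard error of the mean difference
t = diff.mean() / se
p = 2 * stats.t.sf(abs(t), len(diff) - 1)      # two-tailed p value
print(f"mean overestimate = {diff.mean():.1f} points, t = {t:.2f}, p = {p:.4f}")
# The same result comes from stats.ttest_1samp(diff, 0) or stats.ttest_rel(predicted, actual).
```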

References

  1. Barr, C. L. and R. E. Kleck. 1995. Self-other perception of the intensity of facial expressions of emotion: Do we know what we show? Journal of Personality and Social Psychology 68: 608-618.
  2. Bazerman, M. H. and M. Neale. 1992. Negotiating rationally. New York: Free Press.
  3. Camerer, C., G. Loewenstein, and M. Weber. 1989. The curse of knowledge in economic settings: An experimental analysis. Journal of Political Economy 97: 1232-1253.
  4. Epley, N. and T. Gilovich. 2001. Putting adjustment back in the anchoring and adjustment heuristic: An examination of self-generated and experimenter-provided anchors. Psychological Science 12: 391-396.
  5. Gilovich, T. D., K. K. Savitsky, and V. H. Medvec. 1998. The illusion of transparency: Biased assessments of others’ ability to read our emotional states. Journal of Personality and Social Psychology 75: 332-346.
  6. Gordon, R. A. and P. J. Vicari. 1992. Eminence in social psychology: A comparison of textbook citation, social science citation index, and research productivity rankings. Personality and Social Psychology Bulletin 18: 26-38.
  7. Keysar, B. and B. Bly. 1995. Intuitions about the transparency of intention: Linguistic perspective taking in text. Cognitive Psychology 26: 165-208.
  8. Keysar, B., L. E. Ginzel, and M. H. Bazerman. 1995. States of affairs and states of mind: The effect of knowledge on beliefs. Organizational Behavior and Human Decision Processes 64: 283-293.
  9. Raiffa, H. 1982. The art and science of negotiation. Cambridge, Mass.: Harvard University Press.
  10. Savitsky, K. 1997. Perceived transparency and the leakage of emotional states: Do we know how little we show? Unpublished doctoral dissertation, Cornell University.
  11. Thompson, L. 1990. An examination of naive and experienced negotiators. Journal of Personality and Social Psychology 26: 528-544.
  12. Thompson, L. 1991. Information exchange in negotiation. Journal of Experimental Social Psychology 27: 161-179.
  13. Tversky, A. and D. Kahneman. 1974. Judgment under uncertainty: Heuristics and biases. Science 185: 1124-1131.
  14. Vorauer, J. D. and S. Claude. 1998. Perceived versus actual transparency of goals in negotiation. Personality and Social Psychology Bulletin 24: 371-385.


Managing Employee Retention

Bob Gordon, President and CEO of Store24, together with CFO Paul Doucette and COO Tom Hart, was concerned about store-level employee retention and was discussing strategies for increasing it. An important question was the extent to which store profitability was related to “people factors.” This could be addressed by estimating the actual financial impact of the tenure of the manager and the crew at a store.

Site-location factors such as population, number of competitors, pedestrian access, visibility, location, and operating hours also had to be considered as key drivers of profitability. However, manager and crew tenure varied widely, and the stores in the sample appeared to be geographically dispersed, which complicated the site-location factors. An opinion had to be formed as to whether increasing wages, implementing a bonus program, instituting new training programs, or developing a career development program would be the best course of action. We therefore need to determine whether employee tenure drives performance, and how strongly manager and crew tenure affect store financials compared with site-location factors.

Data Analysis

The data given in the case seemed insufficient and unsuitable for a regression analysis, so we downloaded the Data Desk file from the author’s blog and used it for an in-depth analysis. Using the data for all 84 stores, we built a multiple regression model with store profit as the dependent variable and the remaining factors (MTenure, CTenure, Comp, Pop, Visible, PedCount, whether the store is open 24 hours, and whether it is located in a residential or industrial area) as independent variables. The findings are summarized below.

Regression of Profitability with Manager & Crew Tenure

Inferences: The regression run in the Excel sheet produced an R-squared of 0.63, which means the model explains about 63 percent of the variation in store profit. The p-value for MTenure is 0.049, which is below the 0.05 significance level, so the null hypothesis of no effect is rejected and the MTenure coefficient is statistically significant. CTenure’s p-value of 0.809, however, is well above 0.05, so we fail to reject the null hypothesis and the CTenure coefficient is not significant. Hence, we can conclude that a strong relationship exists only between profit and MTenure.
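As a rough illustration of how such a model could be fit outside Data Desk or Excel, the sketch below uses Python’s pandas and statsmodels. The file name store24.csv and the exact column names are assumptions for the sketch, not part of the case materials.

    # Minimal sketch of the full regression model described above (assumed file and column names).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("store24.csv")  # hypothetical export of the 84-store data set

    full_model = smf.ols(
        "Profit ~ MTenure + CTenure + Comp + Pop + Visible + PedCount + Hours24 + Residential",
        data=df,
    ).fit()

    print(full_model.rsquared)                         # share of profit variance explained
    print(full_model.pvalues[["MTenure", "CTenure"]])  # coefficient p-values for the two tenure terms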

Regression of Profitability with Only the Manager Tenure

Inference: The R-squared of 0.61 shows that manager tenure has far greater explanatory power than crew tenure and, on its own, retains nearly all of the explanatory power of the full model. The p-value of 0.0025 is well below the 0.05 significance level, so the null hypothesis is rejected and the result is significant. The tenure of a manager is therefore one of the key drivers of store profitability. From the data analysis, we can infer that the most relevant factor for a store’s profitability is the manager’s tenure; crew tenure was not found to be a significant factor, and the location parameters were also less relevant. The quantitative result is supported by the observation that in the 10 most profitable stores the mean manager tenure is 110.6 months, which is 4.87 times the mean of 22.7 months in the 10 least profitable stores.
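Under the same assumptions as the sketch above, the manager-tenure-only model and the top-10 versus bottom-10 comparison could be reproduced along these lines:

    # Continuation of the sketch above; same assumed store24.csv file and column names.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("store24.csv")

    simple_model = smf.ols("Profit ~ MTenure", data=df).fit()
    print(simple_model.rsquared)            # explanatory power of manager tenure alone
    print(simple_model.pvalues["MTenure"])  # significance of the MTenure coefficient

    ranked = df.sort_values("Profit", ascending=False)
    top10 = ranked.head(10)["MTenure"].mean()     # mean manager tenure, 10 most profitable stores
    bottom10 = ranked.tail(10)["MTenure"].mean()  # mean manager tenure, 10 least profitable stores
    print(top10, bottom10, top10 / bottom10)      # the text reports 110.6, 22.7, and a ratio of ~4.87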

Recommendations

How can Store24 retain its managers? Store24 should implement the following:

  • Increase wages annually based on performance and contribution to sales.
  • Implement a bonus program: create a bonus structure in which employees can earn an annual bonus if they meet pre-specified performance goals.
  • Institute comprehensive training programs.
  • Deploy career development programs.
  • Promote from within whenever possible: give managers a clear path of advancement. They will become frustrated and may stop trying if they see no clear future for themselves at the company.
  • Create open communication between managers and top management: hold regular meetings in which managers can offer ideas and ask questions, and maintain an open-door policy that encourages them to speak frankly with top management without fear of repercussion.
  • Communicate the business’s mission: feeling connected to the organization’s goals is one way to keep managers mentally and emotionally tied to the company.
  • Offer ESOPs: consider offering stock options to managers who meet performance goals and stay for a predetermined period, say three or five years.
  • Implement a well-designed assessment and selection process: include behavioral assessments and structured behavioral interviewing techniques to increase the likelihood of hiring people who can, and will, do the job at a high level, and make sure they know what is expected of them and how they can grow within the company.
