It is easy to mislead people who lack a basic grounding in the mathematical sciences by manipulating data derived from flawed “research” protocols (supposedly scientific studies, surveys, etc.) and publishing the results of such manipulations in what appear, on their face, to be scientific journals and academic archives. Many people assume that the mere publication of a set of “findings” in a scientific journal or academic archive is tantamount, in and of itself, to the endorsement of those “findings” by experts in the relevant fields, particularly if the journals in question have official-sounding names, and particularly if the articles in which these “findings” appear are buttressed by extensive bibliographic references. Most people of good will lack the time and the educational background needed to distinguish high quality documentation of validly constructed scientific research protocols from poor quality documentation, or from documentation of grossly unreliable or poorly designed research protocols.
This holds true regardless of whether the issue in question lends itself readily to precise and accurate assessment, or whether it requires some understanding of the context in which it is encountered and/or some awareness of the limitations that qualify precise and accurate measurements in the field. Psychological research is particularly vulnerable to errors in data analysis, interpretation, and communication of results, and the vast majority of non-professionals lack the skills needed to distinguish valid and reliable assessment tools from invalid or unreliable ones.
A solid understanding of the concepts of validity and reliability is crucial in the field of psychological assessment. Clinical psychologists and other clinicians dealing with abnormal psychology have at their disposal a number of tools with which to assess and describe such issues as personality, mental illnesses, and psychological disturbances.
The validity of a psychological assessment tool is a measure of the extent to which the tool actually measures the characteristic or construct that it is intended to measure. The original Minnesota Multiphasic Personality Inventory (MMPI) is widely regarded as the most extensively researched psychological assessment tool in the world. The original MMPI was released in 1942 by the University of Minnesota, which holds the copyright for this tool. The test was revised in 1989 and is now commonly denoted the MMPI-2; a further revision (the MMPI-2 Restructured Form, or MMPI-2-RF) is scheduled for release in late 2007. A special version of the MMPI, developed for testing adolescents, was released in 1992 as the MMPI-A. The MMPI is one of the most widely utilized tests of adult psychopathology. It is also utilized in criminal justice and correctional contexts, and forms part of the battery of tests used by agencies such as the Secret Service and the FBI to identify suitable candidates for high-risk public safety positions. The MMPI is also used in college and career counseling, in developing substance abuse treatment protocols, and in designing effective treatment strategies for both psychological and medical problems (e.g. chronic pain management). A huge body of literature discusses the validity of this tool, and the vast majority of clinicians and other professionals who use it consider it invaluable in terms of both validity and reliability.
The reliability of a psychological assessment tool is a measure of the extent to which the tool in question yields results which are stable across time. If repetition of the test yields similar results for the test subjects with each repetition, then the test in question is reliable. The MMPI is so widely used precisely because it has been found to be both valid and reliable. This particular tool measures various psychological attributes with a high degree of validity, i.e. it accurately measures what it is intended to measure. It is also reliable, in that repetitions of this test on the same subjects across extended periods of time yield similar results with each repetition.
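To make the notion of test-retest reliability concrete, here is a minimal sketch in Python, using entirely hypothetical scores invented for illustration: the same test is administered to the same subjects on two occasions, and the correlation between the two sets of scores is computed.

```python
# A minimal sketch of test-retest reliability, using hypothetical scores.
# The same test is given to the same eight subjects on two occasions;
# a Pearson correlation near 1.0 indicates stable (reliable) results.

from statistics import correlation  # available in Python 3.10+

first_administration  = [52, 67, 48, 71, 60, 55, 63, 49]   # hypothetical scores
second_administration = [54, 65, 50, 73, 58, 57, 61, 47]   # one year later

r = correlation(first_administration, second_administration)
print(f"Test-retest reliability (Pearson r): {r:.2f}")
```

On these made-up numbers, r comes out close to 1.0, which is the pattern of stability across repetitions described above.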
The Rorschach Inkblot Test (RIT) is the second most widely utilized test in personality assessment. The subject, or testee, is shown a total of 10 symmetrical inkblots and is asked to describe what he or she sees in each one. Everything that the subject says and does is recorded by the tester, who interprets the results with the aid of a scoring system referred to as the Exner scoring system, or Comprehensive System. This scoring system includes frequency tables showing how often specific responses to each inkblot are given by the general population, and it includes scales for (among others) Form Quality, Deviant Verbalizations, Complexity, Human Figure, Organizational Activity, and Overall Total Responses. Some of these scales have been shown to correlate reasonably well with intelligence across different testers; for example, Overall Total Responses (the R scale) correlates positively with intelligence, with higher values of R associated with higher intelligence. However, high values of R also correlate with higher values on some of the scales which indicate psychopathology. The overall validity of the Rorschach Inkblot Test is highly controversial, as is its reliability. Intuitively, this is easy to understand: a degree of subjectivity is unavoidable when administering and scoring this test, regardless of who performs the test and how often it is performed. Furthermore, the 10 inkblots comprising the test materials were leaked in print in 1983 and were distributed on the Internet in 2004, enabling potential testees to “rehearse” their answers, particularly under circumstances in which the test is administered for diagnostic purposes within the criminal justice system (granting parole, assigning custody, etc.). It is accurate to state that both the validity and the reliability of this test are questionable, particularly when compared to those of the MMPI.
Most readers of this blog will have encountered this information for the first time here, in this post. Those readers to whom both of these tests have been administered will probably be quite surprised to learn that one of them is considered far more reliable and valid than the other. This is not the fault of the reader; it is mentioned merely to underscore the extent to which ordinary men and women place considerable faith in the protocols utilized by clinicians and diagnosticians in the field of psychological assessment, particularly as pertains to the diagnosis of mental disorders.
The hard right in the US knows full well that the majority of people lack scientific backgrounds and can therefore be misled by assertions made by “scientists” and “researchers,” particularly when dealing with controversial issues such as the rights of gay people and the manner in which gay people lead their lives. It is therefore very easy to mislead people by publishing the results of “research” in journals with authoritative-sounding names. Tapping into such ignorance and presumption is precisely what anti-gay organizations (such as “Focus on the Family” (FOTF), the “American Family Association” (AFA), the “Traditional Values Coalition” (TVC), and the “Family Research Council” (FRC)) engage in with respect to shaping public policy and public opinion of gay people in the US. The “researcher” most frequently cited by these organizations in their attempts to portray gay people as depraved, diseased, uncaring, and immoral is a man named Paul Cameron.
Cameron was born in 1939 in Pittsburgh, PA. He received his B.A. from Los Angeles Pacific College in 1961 and went on to obtain his M.A. from California State University, Los Angeles, in 1962. Cameron then obtained his Ph.D. from the University of Colorado in 1966, submitting a dissertation titled “Age as a determinant of differences in non-intellective psychological functioning.” He was affiliated with several colleges and universities until 1980; these institutions included Wayne State University (1967 – 1968), University of Louisville (1970 – 1973), Fuller Theological Seminary (1976 – 1979), and the University of Nebraska (1979 – 1980). In 1982, Cameron founded an organization named the “Institute for the Scientific Investigation of Sexuality” (ISIS), which is now known as the “Family Research Institute” (FRI) (Cameron is Chairman of this organization).
The FRI was formed following an unsuccessful attempt by the Lincoln, NE City Council to pass an ordinance which would have prohibited employment discrimination on the basis of sexual orientation. Cameron headed up an organization named the “Committee to Oppose Special Rights for Homosexuals,” which led the opposition to the proposed measure. During his campaign to defeat this measure, Cameron delivered a speech at the Lutheran chapel of the University of Nebraska, in which he stated that a four-year-old boy had been brutally sexually assaulted by a gay man at a local shopping mall. In fact, the police were unable to confirm that any such attack had occurred, and Cameron has since admitted that he had heard (and repeated) this accusation as a mere rumor.
The mission statement of the FRI declares that the FRI has “…one overriding mission: to generate empirical research on issues that threaten the traditional family, particularly homosexuality, AIDS, sexual social policy, and drug abuse.” The organization further seeks “…to restore a world where marriage is upheld and honored, where children are nurtured and protected, and where homosexuality is not taught and accepted, but instead is discouraged and rejected at every level.” The FRI moved from Lincoln, NE to Washington, DC, and then to Colorado Springs, CO, where it remains active and continues to generate anti-gay propaganda.
Cameron is a tireless crusader who is utterly determined to portray the gay community as a threat to public health, a danger to small children, and a scourge to civilization itself.
Ordinarily, ad hominem observations are useless when debating issues of fact, and tend to undermine the credibility of the person who makes them. However, when such observations bear directly on the credibility of a person who assumes a self-appointed role as guardian of the public health and welfare, then it is entirely reasonable to make reference to such observations, particularly when the person in question casts aspersions on the credibility of the group that he or she attacks. Bearing this in mind, the following observations should be made relative to Cameron and his relationships to professional bodies and peers.
Cameron describes himself as a “Researcher / Clinician” on his resume. However, Cameron was licensed to practice psychology only in the State of Nebraska, and his license to practice in that state is currently listed as “Inactive” on the Web site of the Nebraska Department of Health and Human Services (see http://www.nebraska.gov/LISSearch/search.cgi, where you can perform a search for his credentials). His license (#100334) lapsed into “Inactive” status effective January 2, 1995. Cameron is therefore not a licensed clinician, and his continued references to himself as a clinician are flat-out lies.
Cameron was expelled from the “American Psychological Association” (APA) in December 1983, after ethics charges were brought against him in response to his misrepresentation and distortions of the results of studies performed by other psychologists working at the University of Nebraska. Cameron insists that he resigned from the APA – however, the APA’s bylaws make it clear that a member of the APA may not resign during the course of an ethics investigation of that member. The APA formally expelled Cameron on December 2, 1983, stating that “Paul Cameron (Nebraska) was dropped from membership for a violation of the Preamble to the Ethical Principles of Psychologists.” Cameron has gone to elaborate and embarrassing lengths to explain this away on his Web site – however, his formal expulsion from this body stands.
The “Nebraska Psychological Association” (NPA) adopted a resolution at its membership meeting on October 19, 1984, stating that this organization “formally disassociates itself from the representations and interpretations of scientific literature offered by Dr. Paul Cameron in his writings and public statements on sexuality.” The NPA went on to state that “…the Nebraska Psychological Association would like it known that Dr. Cameron is not a member of the Association. Dr. Cameron was recently dropped from membership in the American Psychological Association for a violation of the Preamble to the Ethical Principles of Psychologists” [emphasis added].
In 1985, the “American Sociological Association” (ASA) adopted a resolution declaring that “Dr. Paul Cameron has consistently misinterpreted and misrepresented sociological research on sexuality, homosexuality, and lesbianism,” also noting that “Dr. Paul Cameron has repeatedly campaigned for the abrogation of the civil rights of lesbians and gay men, substantiating his call on the basis of his distorted interpretation of this research.” This resolution formally charged an ASA committee with the task of “critically evaluating and publicly responding to the work of Dr. Paul Cameron.”
In August 1986, the ASA accepted the committee’s report and adopted the following resolution: “The American Sociological Association officially and publicly states that Paul Cameron is not a sociologist, and condemns his consistent misrepresentation of sociological research. Information on this action and a copy of the report by the Committee on the Status of Homosexuals in Sociology, ‘The Paul Cameron Case,’ is to be published in Footnotes, and be sent to the officers of all regional and state sociological associations and to the Canadian Sociological Association with a request that they alert their members to Cameron’s frequent lecture and media appearances.”
Cameron’s shameful abuse of the public trust has been noted by organizations outside of the US. In 1996, the Board of Directors of the “Canadian Psychological Association” (CPA) released a position statement denouncing Cameron’s work and distancing the CPA from Cameron’s “findings,” stating that Cameron had “consistently misinterpreted and misrepresented research on sexuality, homosexuality, and lesbianism.”
It is difficult to find any contemporary figure in the human sciences who has been denounced by so many well-respected and prestigious organizations, including the largest professional organization of psychologists in the US (the APA). However, criticism of Cameron and his methodology has not been confined to statements made by professional organizations. In 1985, US District Court Judge Jerry Buchmeyer subjected Cameron to a blistering tongue-lashing. Judge Buchmeyer, presiding over proceedings pertaining to the constitutionality of the Texas “homosexual conduct” statute, concluded that “…Dr. Paul Cameron...has himself made misrepresentations to this Court" and that "[t]here has been no fraud or misrepresentations except by Dr. Cameron" (see Baker v. Wade (1985) (p.536)).
Undaunted by these criticisms of both his integrity and his methodology, Cameron went on to participate in the now-notorious “gay obituary” study, the results of which purported to show that gay men and lesbians have much shorter lifespans than heterosexual men and women. In 1994, Cameron and his associates counted obituaries published in gay newspapers and periodicals, and used these data to estimate the lifespans of gay men and lesbians. This is a textbook case of the deficiencies associated with drawing conclusions from a convenience sample as opposed to a representative sample.
A representative sample is precisely what the name implies: a sample that is representative of the entire population from which it is drawn. When a doctor performs blood tests, e.g. for the diagnosis of an infection, the doctor does not drain all of the blood from the patient’s body in order to determine the white blood cell count (WBC) and the presence or absence of antibodies. Instead, the doctor fills one or more test tubes with blood drawn (usually) from a vein, and then performs the necessary tests on those specimens. This is methodologically sound because blood drawn from a vein in the arm is very similar to blood drawn from a vein in the foot; in most cases, the WBC will be the same regardless of where the blood was drawn. The test tubes contain, in other words, a representative sample of the patient’s blood.
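The difference between a representative sample and a convenience sample can be made concrete with a toy simulation. The numbers below are entirely hypothetical and chosen only for illustration: a random sample recovers the population’s average age at death quite well, whereas an estimate built from a skewed subgroup, loosely analogous to obituary counting, does not.

```python
# A toy simulation (entirely hypothetical numbers) contrasting a random sample
# with a convenience sample when estimating average age at death.

import random
random.seed(0)

# Hypothetical population of 100,000 ages at death, mean ~75 years
population = [random.gauss(75, 12) for _ in range(100_000)]

# Representative sample: 500 people chosen at random
random_sample = random.sample(population, 500)

# Convenience sample: only deaths that happened to appear in one narrow
# source that skews young -- loosely analogous to counting obituaries
convenience_sample = [age for age in population if age < 55][:500]

mean = lambda xs: sum(xs) / len(xs)
print(f"Population mean:        {mean(population):.1f}")
print(f"Random-sample estimate: {mean(random_sample):.1f}")
print(f"Convenience estimate:   {mean(convenience_sample):.1f}")
```

The convenience estimate is badly biased not because the arithmetic is wrong, but because the rule for inclusion in the sample is correlated with the very quantity being estimated.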
Now consider obituaries published in gay periodicals and newspapers. These obituaries are hopelessly unrepresentative of the population in question (the entire gay and lesbian population). The reasons for this lack of representativeness include the following:
Most gay community newspapers do not have sections of death notices. As the AIDS epidemic began to claim the lives of so many gay men during the 1980s, however, many (but certainly not all) gay newspapers and periodicals began to publish obituaries. These obituaries are usually compiled by, and submitted by, close friends and relatives of the deceased (exceptions to this occur in those cases when the deceased is a public figure, or an influential figure in gay politics, in which cases obituaries are frequently prepared by organizations seeking gay equality in the US). In the vast majority of cases (those cases where the deceased is not a public figure), obituaries only appear in gay community newspapers and periodicals if (1) a loved one or friend of the deceased notifies the newspaper of the death of the deceased, often after preparing an obituary for the deceased, and (2) the editor of the newspaper or periodical in question decides to print the obituary.
Thus, most gay men and lesbians do not have their deaths written up in obituaries published in the gay media. The following is a list (by no means exhaustive) of the groups of gay men and lesbians who, upon passing away, are unlikely to have obituaries printed in the gay media:
· Gay men and lesbians who are not involved in the gay community (men and women who are not activists or outspoken contributors to the politics of gay equality);
· Gay men and lesbians who are closeted, i.e. not open about their sexual orientation. Sadly, this reflects a large percentage of the overall gay population in the US;
· Gay men and lesbians whose families do not wish for the sexual orientation of the deceased to be made a matter of public record;
· Gay men and lesbians whose families or significant others simply do not consider sending obituaries to the gay press;
· Gay men and lesbians whose families or significant others did not send in obituaries for other reasons (shock and grief can prevent even a gay-supportive family or circle of friends from thinking about sending obituaries to the gay press);
· Gay men and lesbians who die without leaving loved ones to write obituaries for them (e.g. gay people whose loved ones died before them).
An accurate estimate of the lifespans of gay men and lesbians would have to include the lifespans of people from all of the above groups even to approach an adequate and accurate representation of the average ages of death of gay men and lesbians. Furthermore, this “research” is fatally flawed in another important respect: it is by its nature a retrospective analysis of lifespans, where a prospective analysis would be much better suited to the task in question. A prospective study would require the selection of groups of heterosexual and gay men and lesbians (at least four groups in total: gay men, lesbians, heterosexual men, and heterosexual women) carefully chosen to eliminate confounding variables such as socio-economic status, congenital illnesses (which have no bearing on sexual orientation), access to healthcare, differences in schooling and education, etc. Retrospective studies, whilst useful, are flawed in that they cannot, even under the best of circumstances, yield results as meaningful as those yielded by prospective studies. For example, when assessing the efficacy of anti-retroviral medications, it is almost always necessary to identify a control group and an experimental group, the members of which have to be matched for such factors as prior exposure to specific anti-retroviral drugs, comparable viral loads, comparable clinical presentation, etc. Only when the efficacy of the drug in question is established by observing and documenting improvements in clinical outcomes, or improvements in terms of lower viral load, higher CD4 counts, etc., can the experimental drug be said to be effective as an addition to existing treatment regimens. Prospective studies, however, are beset with ethical problems: many doctors regard it as immoral to maintain patients on the non-experimental protocol, for which reason patients receiving the non-experimental protocol are frequently granted access to the experimental drug as soon as the improved outcome of adding that drug to those already prescribed has been established.
In short, Cameron’s “obituary studies” are utterly worthless in terms of predicting and comparing the lifespans of gay people versus heterosexual people. Cameron has a Ph.D. – he is not a naïve fool. The poor quality of his analysis and the highly selective nature of the convenience sample in question lead inevitably to the inference that Cameron conducted his “obituary study” not for purposes of dispassionate analysis and the advancement of legitimate scholarship, but in order to generate “empirical data” in the service of “restor[ing] a world where marriage is upheld and honored, where children are nurtured and protected, and where homosexuality is not taught and accepted, but instead is discouraged and rejected at every level.”
During 1983 and 1984, Cameron conducted a “National Survey,” supposedly for the purpose of accurately and dispassionately quantifying the sexual behavior of gay men and lesbians throughout the US. The survey drew upon responses from the residents of seven municipalities (Bennett (NE), Denver (CO), Los Angeles (CA), Louisville (KY), Omaha (NE), Rochester (NY), and Washington (DC)); data from Dallas (TX) were added later. However, at least six serious errors have been identified in Cameron’s sampling techniques, survey methodology, and interpretation of results. Any one of these errors, on its own, would render Cameron’s conclusions highly suspect; the combination of all six results in data which are completely meaningless. The six errors are discussed below:
1) There is nothing “national” about data derived from only eight municipalities. By drawing data only from respondents living within these eight municipalities, Cameron systematically excluded all US adults who resided elsewhere. At best, and assuming otherwise flawless sampling techniques, methodology, and interpretation, Cameron’s “findings” could be extrapolated only to the populations of the eight municipalities in question. However, there was nothing flawless about the sampling techniques utilized within these eight municipalities, as discussed in (2) below.
2) Cameron never reported the response rate he obtained within each of these eight localities. Instead, Cameron reported a “compliance rate” – the percentage of respondents in each city who returned the survey form after actually being contacted and handed the form. In other words, Cameron omitted the vast majority of potential respondents who simply refused to participate in the survey (some of these people refused to accept the survey form and wanted nothing to do with the study). There are major differences between people who refuse to participate in a study and those who choose to participate, particularly when the information gleaned from the study is highly sensitive and personal in nature. This was an error that any first-year student of inferential statistics would recognize in a heartbeat. Cameron reported a compliance rate of 43.5% for the seven-city survey (later corrected to 47.5%) and a 57.7% compliance rate for the Dallas survey. The actual response rates, given the above distortion, were much, much lower. Use of the “compliance rate” was grossly misleading because it excluded the large number of households within the eight cities that were never successfully contacted (the so-called “not-at-homes”). Legitimate research of this nature requires that the researcher report the true response rate: the actual number of completed surveys divided by the total number of households initially targeted by the survey. Using Cameron’s own data, the true response rate for the seven-municipality survey was a mere 23.6%; the response rate for the Dallas survey was a mere 20.7%; and (using appropriate weighting techniques in these calculations) the overall response rate across all eight municipalities was approximately 23%. More than three out of every four households targeted for this survey either refused outright to participate, accepted a survey form but failed to return it, or could not be contacted. This pitifully low response rate makes it impossible to take Cameron’s conclusions seriously. While there is no uniformly accepted figure for a “good” response rate, it is clear that the Cameron surveys relied not upon a random sample but upon a convenience sample. It is impossible to generalize from a convenience sample to an entire population with any confidence in the legitimacy of the generalization; yet this is precisely what Cameron attempted. Again, it should be stressed that Cameron is neither a fool nor naïve, leading inevitably to the inference that his publication of these “results” was motivated by raw animus towards the class of persons targeted by the “survey” (gay Americans).
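The arithmetic difference between the two rates is trivial to state but easy to obscure. The sketch below uses hypothetical household counts, chosen only to roughly mirror the rates cited above (Cameron’s raw counts are not reproduced here), to show how a “compliance rate” can make a dismal response rate look respectable.

```python
# A minimal sketch of "compliance rate" versus true response rate.
# The household counts are hypothetical, chosen to roughly mirror the
# rates cited above; they are not Cameron's actual figures.

def compliance_rate(returned: int, contacted: int) -> float:
    """Returned surveys as a share of households actually handed a form."""
    return returned / contacted

def response_rate(returned: int, targeted: int) -> float:
    """Returned surveys as a share of ALL households originally targeted."""
    return returned / targeted

targeted  = 10_000  # households the survey set out to reach
contacted = 5_400   # households actually handed a survey form
returned  = 2_360   # completed surveys returned

print(f"Compliance rate:    {compliance_rate(returned, contacted):.1%}")  # ~43.7%
print(f"True response rate: {response_rate(returned, targeted):.1%}")    # 23.6%
```

The same number of completed surveys yields a headline figure of roughly 44% or roughly 24% depending entirely on which denominator the researcher chooses to report.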
3) Had Cameron’s 1983 – 1984 combined sample been a true random sample (which it most certainly was not, as discussed in (2) above), it would have been large enough (N = 5,182) to permit Cameron to estimate general population characteristics with only a small margin of error. This is moot, however, given the extremely low response rates and the fact that Cameron employed a convenience sample rather than a random sample. Yet even if one assumes that Cameron’s sample was a random sample (which it was not), Cameron tried, in several papers, to make reliable estimates about the characteristics of extremely small subgroups within this sample. For example, Cameron identified a total of 17 respondents within his 1983 – 1984 samples who claimed to have a gay parent. Cameron then scrutinized the questionnaires completed by these 17 respondents for negative sexual experiences, one of which was incestuous sexual activity with a gay parent. Of the 17 respondents who were asked whether they had ever experienced an incestuous sexual encounter with their gay parents, five answered in the affirmative. This enabled Cameron to argue that 29% (five divided by 17) of the children of gay parents have incestuous relationships with their parents, as opposed to only 0.6% of the children of heterosexual parents, and that “having a homosexual parent(s) appears to increase the risk of incest with a parent by a factor of about 50.” Reliance upon such a small subset of respondents is invalid because data from such a ridiculously small sample carry an unacceptably high margin of sampling error. In a true random sample of 17 (and this was not a true random sample of 17, as discussed in (2) above), the margin of error due to sampling (at a confidence level of 99%) is plus-or-minus 33%. Thus, had the subset of 17 people been drawn from a true random sample (which it was not), all that one would have been able to conclude from Cameron’s data is that the true proportion of adults who have a gay parent and who have been sexually abused by that parent lies anywhere between -4% (effectively zero) and 62%. Such a wide margin of error renders the result completely meaningless. Furthermore, because the confidence interval includes zero, Cameron could not legitimately conclude that the true number of children of gay parents (in the eight municipalities sampled) who were the victims of incest with a gay parent was actually different from zero.
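The margin-of-error claim can be checked with a back-of-the-envelope calculation. The sketch below is my own recomputation, not the original calculation: it uses the standard normal approximation with a conservative worst-case proportion (p = 0.5), which lands close to the plus-or-minus 33% figure cited above (small discrepancies reflect which variant of the formula is used).

```python
# A back-of-the-envelope check of the sampling error around a proportion
# estimated from only 17 respondents. Normal approximation; a conservative
# worst-case p = 0.5 is used for the margin. This is a recomputation, not
# the original calculation, so the figures differ slightly from +/-33%.

import math

n = 17          # respondents reporting a gay parent
k = 5           # of those, respondents reporting parental incest
p_hat = k / n   # observed proportion, ~29.4%

z = 2.576       # z-score for a 99% confidence level
margin = z * math.sqrt(0.5 * 0.5 / n)   # worst-case margin, ~31 points

print(f"Observed proportion: {p_hat:.1%}")
print(f"Margin of error:     +/- {margin:.1%}")
print(f"99% interval:        {p_hat - margin:.1%} to {p_hat + margin:.1%}")
# Output: roughly -1.8% to 60.6% -- an interval so wide (and one that
# includes zero) that the 29% point estimate is meaningless.
```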
4) The validity of the questionnaire items was highly doubtful. Data derived from self-reporting are useful only to the extent that respondents answer the questions truthfully and accurately. When participants give incorrect or unreliable answers, it is either because (1) they are unable to give accurate responses or (2) they are unwilling to give accurate responses. In Cameron’s “survey,” reasons exist to assume that both factors operated. Cameron’s questionnaires contained 550 items and took, on average, at least 75 minutes to complete. A large number of questions dealing with highly sensitive aspects of human sexuality were included, in some cases in a very complex format. The problems of respondent fatigue and item difficulty both played a role in reducing the validity of the questionnaire. Respondent fatigue is particularly likely to creep into a lengthy survey that takes more than an hour to complete. It is possible to control for respondent fatigue by repeating, towards the end of the test, some of the questions asked earlier (if discrepancies are noted with consistency, the test should be revamped to reduce respondent fatigue). Cameron did not utilize any such consistency checks in his questionnaire (a simple implementation of such a check is sketched below). Furthermore, some of the questions, in addition to being extremely sensitive, were presented in an extremely complex multiple-choice format. In one section, for example, respondents were expected to read a list of 36 categories of persons (e.g., “my female grade school teacher,” “my male [camp, Y, Scout] counselor”), then to note the age at which each person “made serious sexual advances to me,” then to note the age at which each person “had experienced physical sexual relations with me,” and then to report the total number of people in each category with whom the respondent had had sexual relations. Another item asked respondents why they thought they had developed their sexual orientation, and gave a checklist of 44 reasons, including “I was seduced by a homosexual adult,” “I had childhood homosexual experiences with an adult,” and “I failed at heterosexuality.” Many respondents probably became confused, tired, and alienated by the content of some of the questions. In addition, when presented with long lists of alternatives, many respondents may have skipped the lower items on the list, or read them incompletely. Another validity problem that can arise when dealing with such sensitive issues takes the form of respondents intentionally giving incorrect information. Any test or questionnaire based on self-reporting relies on the honesty of the participants to provide full and accurate information, and many respondents may have been made uncomfortable by some of the questions that were asked. One way in which an experienced psychologist can reduce the likelihood of false or malicious answers is to assure the respondents that their answers will remain anonymous (as opposed to merely confidential). This procedure is utilized in cases where the subject matter is sensitive and respondents do not wish for their names to be associated with their answers. Cameron’s own notes and conclusions imply that the questionnaire that he distributed was not anonymous. (There is a big difference between confidentiality and anonymity, and Cameron may have promised only the former.)
The manner in which the questionnaires were presented almost certainly impacted negatively on the validity of the results: complete strangers simply arrived on the doorsteps of the respondents, without any affiliation with a prestigious university, college, or institute. In Bennett (NE), the local newspaper actually reported on advice given by a police officer to a neighbor not to complete the survey. Furthermore, it is entirely possible that some people used this opportunity to sabotage the survey by giving outrageous and inaccurate answers to the more sensitive questions (e.g. exaggeration of sexual activity, exaggeration of participation in multiple unconventional sexual acts, imputing instances of incest, etc.). As discussed earlier, Cameron’s analysis of subgroups was particularly sensitive to fake answers because of the tiny numbers of people involved (17 people stated that they had a gay parent; two or three exaggerated answers would have dramatically skewed the results). The impact of mischief-makers is maximized when dealing with very small subset samples, as occurred in Cameron’s case. Furthermore, nobody from the study was present with the participants when they completed their questionnaires – a factor which could have played a dramatic role in permitting mischief-makers to skew the results.
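A consistency check of the kind Cameron omitted is straightforward to implement. The sketch below, with entirely hypothetical item names and answers, flags respondents whose answers to a repeated item diverge between the start and the end of a long questionnaire, a telltale sign of respondent fatigue or sabotage.

```python
# A minimal sketch (hypothetical item names and answers) of the consistency
# checks Cameron's questionnaire lacked: selected items are repeated near the
# end of the survey, and disagreement between the paired answers flags
# possible respondent fatigue or deliberately unreliable answering.

def inconsistent_pairs(responses: dict[str, str],
                       repeats: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return the (early_item, late_item) pairs whose answers disagree."""
    return [(a, b) for a, b in repeats if responses.get(a) != responses.get(b)]

# One hypothetical respondent: Q7 is repeated near the end as Q512, etc.
responses = {"Q7": "never", "Q512": "frequently", "Q12": "yes", "Q520": "yes"}
repeats = [("Q7", "Q512"), ("Q12", "Q520")]

print(inconsistent_pairs(responses, repeats))  # [('Q7', 'Q512')] -- answers drifted
```

A questionnaire that triggers many such flags should be revamped, and individual response sheets riddled with contradictions should be excluded from analysis; nothing in Cameron’s published reports suggests either safeguard was applied.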
5) The interviewers may have been biased and may not have followed uniform procedures. Professional survey organizations go to considerable lengths to ensure that interviewers approach the issues in question from a non-biased and non-judgmental viewpoint; they strictly follow standardized procedures and communicate a neutral, non-judgmental attitude towards the respondents. Furthermore, the interviewers frequently know nothing of the goals of the survey. It is impossible to know whether Cameron followed this protocol: in his published report, he made no reference to such quality control procedures, which in and of itself implies that he did not employ them. Nor is it clear whether a supervisor randomly contacted some of the respondents in order to verify that the respondents had themselves completed the questionnaires (numerous studies have been sabotaged by lazy administrators failing to distribute the tests properly; some such administrators complete several tests themselves in order to skip the hard work of going from door to door). Such controls would have strengthened the validity of Cameron’s findings; the fact that they were not mentioned in his report suggests that they were not implemented in the field. More serious, however, is the undisputed fact that several high-level members of the research team were active in distributing the questionnaires and collecting the data. This is problematic because these people can be expected to have strong biases and a vested interest in the outcome – an interest that can cause them to transmit their expectations to the respondents (this is why people who have no knowledge of the objectives of a study are usually employed to gather its data).
6) Cameron made his bias known during the period in which the survey was being conducted. In order to study a social phenomenon, researchers take great care to ensure that the individuals being studied do not become aware of the expectations or goals of the research in question. Should the subjects become aware of those goals or expectations, they may deliberately tailor their answers to thwart or to encourage the researchers’ expectations. Cameron ignored this universally accepted caution and made headlines in Omaha, NE (one of the cities selected for his “nationwide” research), characterizing his survey as providing “ammunition for those who want laws adopted banning homosexual acts throughout the United States,” and he was quoted as saying that the survey’s sponsors were “betting that (the survey results would show) that the kinds of sexual patterns suggested in the Judeo-Christian philosophy are more valid than the Playboy philosophy.” While conducting his survey, Cameron was publicly vocal in his support for a proposed quarantine of all gay people (he spoke out publicly about this proposal in Houston at the same time that the survey was being conducted in Dallas). It is entirely possible that respondents in other cities became aware of Cameron’s goals and deliberately declined to participate in the survey, or gave answers reflecting their personal bias and their desire to shape public policy.
Then there is the question of the publications in which Cameron published the results of his research. Research studies are often evaluated in terms of the prestige of the scientific journals in which they are published, as well as in terms of the number of times they are cited in the literature by other researchers and scholars. The Social Sciences Citation Index (SSCI) provides an objective measure of the latter criterion, and the Journal Citation Reports (JCR) of the former.
The SSCI is a quarterly publication that lists, alphabetically by author, all articles that have been cited in scientific journals during that period, together with bibliographic references for the citing articles.
The JCR compiles data from the SSCI to report an impact factor for individual academic journals. The impact factor describes the average frequency with which articles in a particular journal are cited. It is computed as the number of times any article from that journal is cited during the first two years following its publication divided by the total number of articles published in that journal during the time period. To provide a simplified example, suppose that a particular journal published 25 articles in 1990, and those 25 articles were subsequently cited a combined total of 125 times between 1990 and 1992. The journal's impact factor for 1990 would be 125/25 or 5.0. Although the impact factor has limitations, it is widely used by librarians, information scientists, and researchers from a variety of disciplines as an objective indicator of a journal's quality, value, and impact.
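The worked example above reduces to a single division; here it is as a small sketch.

```python
# A minimal sketch of the simplified impact-factor arithmetic described above.

def impact_factor(citations_in_window: int, articles_published: int) -> float:
    """Average number of citations per article over the counting window."""
    return citations_in_window / articles_published

# The worked example from the text: 25 articles published in 1990,
# cited a combined 125 times between 1990 and 1992.
print(impact_factor(citations_in_window=125, articles_published=25))  # 5.0
```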
Cameron’s “research” has been published in four very low quality journals (i.e. journals with very low impact factors). Most of his “research” was published in a single journal named “Psychological Reports.” Unlike prestigious journals, Psychological Reports charges the researcher a per-page fee to print the so-called research. This journal is, in fact, a vanity journal in which “researchers” can get material published that would be rejected by prestigious and highly regarded journals. Cameron himself once described another journal in which his “research” has been published as “obscure.”
Based on data from the SSCI, Cameron’s work had almost no impact whatsoever on the literature.
It should be clear, taking all of the above issues into consideration, that Cameron and his acolytes are skilled liars and fraud artists. Cameron’s “work” has been savaged by reputable researchers, and on the few occasions on which his articles have been cited in the professional literature, the citations have overwhelmingly taken the form of critiques of his methodology. This is a man who is little more than a hired gun with a veneer of academic respectability. He is not interested in legitimate scientific research – to the contrary, he is committed to abusing research protocols in an effort to lend credence to his quackery and his efforts at extreme right-wing social engineering.
PHILIP CHANDLER