
Roger Clarke's 'IS Citation Analysis'

An Exploratory Study of Information Systems Researcher Impact

Roger Clarke **

Review Version of 22 July 2007

© Xamax Consultancy Pty Ltd, 2006-07

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://www.rogerclarke.com/SOS/CitAnal0707.html

The previous version is at http://www.rogerclarke.com/SOS/CitAnal0605.html


Abstract

Citation-counts of refereed articles are a potentially valuable measure of the impact of a researcher's work, in the information systems discipline as in many others. This paper reports on a pilot study of the apparent impact of IS researchers, as disclosed by citation-counts of their works in the Thomson/ISI collection and through Google Scholar. Citation analysis using currently available indexes is found to be fraught with many problems, and must be handled with great care.




1. Introduction

Information systems (IS) is a maturing discipline, with a considerable specialist literature, and relationships with reference disciplines that are now fairly stable and well-understood. In a mature discipline, various forms of 'score-keeping' are undertaken. One reason for this is as a means to distinguish among applicants for promotion, and contenders for senior appointments. A further application of score-keeping is as a factor in the allocation of resources to support research. In some countries, this application is increasingly significant.

One approach to score-keeping is to count the number of works that a researcher publishes, and treat it as a measure of research quantum. Another is to moderate the count of works by the time-span over which they were published, the categories of publication (such as books, conference papers and journal articles), and the quality of the venues. This represents a measure of research quality rather than quantity. Yet another approach is to count the number of citations of those publications, in order to generate an indicator of the researcher's impact.

This paper performs an analysis of citations of IS researchers, in order to examine the extent to which currently available data provides satisfactory measures of researcher impact. It is motivated by the concern that, whether or not such analyses are performed by members of the IS discipline, others will perform them for us.

The paper commences by briefly reviewing formal schemes for appraising researcher quality and impact that have been implemented or proposed in several countries in recent years. It then discusses citation analysis, and its hazards. The research objectives and research method are described. The raw scores generated from two major sources are presented, and issues arising from the analysis are identified and examined.


2. Schemes to Assess Researcher Impact

In a number of countries in recent years, there have been endeavours to implement mechanisms for evaluating the impact of individual researchers and research groups. These have generally been an element within a broader activity, particularly relating to the award of block grants to research centres. Examples include the U.K. Research Assessment Exercise (RAE 2001, 2005), which has been operational since 1986, the New Zealand Performance-Based Research Fund (PBRF 2005), and the emergent Australian Research Quality Framework (RQF - DEST 2005, 2006 and 2007), currently in train.

The procedure in the most recent RAE in 2001 was described as follows: "Every higher education institution in the UK may make [submissions] ... Such submissions consist of information about the academic unit being assessed, with details of up to four publications and other research outputs for each member of research-active staff. The assessment panels award a rating on a scale of 1 to 5, according to how much of the work is judged to reach national or international levels of excellence" (RAE 2001, p. 3). The forthcoming 2008 Exercise also involves evaluation of "up to four items ... of research output produced during the publication period (1 January 2001 to 31 December 2007) by each individual named as research active and in post on the census date (31 October 2007)" (RAE 2005, p. 13). Similarly, in the New Zealand scheme, a major part of the 'quality evaluation' depends on assessment of the 'evidence portfolio' prepared by or for each eligible staff-member, which contains up to four 'nominated research outputs' (PBRF 2005, p. 41).

The Australian RQF seeks to measure firstly "the quality of research", which includes "its intrinsic merit and academic impact", and secondly the "broader impact or use of the research", which refers to "the extent to which it is successfully applied in the broader community". The outcomes of the measurements would be rankings, which would be used in decision-making about the allocation of research-funding. The unit of study is 'research groupings', which are to be decided by each institution, but will be subject to considerable constraints in terms of disciplinary focus and minimum size (DEST 2005, 2006 and 2007). The RQF envisages having each research grouping assessed against 5-point scales for each of quality (apparently including impact on researchers) and "impact ... in the wider community".

The design of each of the three schemes involves lengthy and bureaucratic specifications of how research groupings and individuals are to fill in forms, including definitions of the publications that can be included, large evaluation panels comprising people from disparate disciplines, and lengthy and bureaucratic assessment processes. The RAE in particular has been a highly resource-intensive activity. A recent report suggests that the RAE is shortly to be abandoned, on the basis that it has achieved its aims (MacLeod 2006).

The schemes distinguish between research quality and research impact. For research quality, the RQF, for example, appears to have adopted a modified form of the RAE approach, with each research grouping to be allocated a rating on a 5-point scale based on the relevant panel's assessment of the "contribution" of the research grouping's "research work", and the "significance" of the area in which the work is undertaken. Research quality is partially relevant to the focus of this paper, because of the need to take into account the quality of the venues in which papers are published, and in which citations appear.

As regards research impact, the schemes distinguish two categories. Measures of 'broader impact' on business, government and the community are challenging to devise, not least because less formal publications such as the trade press, professional magazines and newsletters, and even some government reports, are sometimes coy and often imprecise in their attribution to sources. This paper focusses primarily on impact on research and researchers.

A key indicator of a researcher's impact is evidence of the use of their refereed publications by other researchers, which is an indicator of the extent to which they have contributed effectively to the 'cumulative tradition' (Keen 1980). Counts of citations within journals and refereed conference proceedings are a direct measure. Other indicators include re-publication such as inclusion in collections and anthologies, and translations. Publications impact may be highly focussed, even on a single work, or may involve the summation of many works. Impact as measured by citation-count might be regarded as merely one aspect of a broader concept such as reputation or esteem. In that case, further indicators could include international appointments, prizes and awards, memberships of academies and editorial boards, keynote-speaker invitations, and collaborations with other highly-reputed researchers.

Schemes such as the RAE, PBRF and RQF are political in nature, designed to provide a justification for funds-allocation decisions. This paper is concerned with whether the use of citation analysis would provide a reasonable basis for evaluating the impact of IS researchers on other researchers, or, in RQF terms, their "academic impact". A weakness of the paper is that it works within the backward-looking approach adopted by the evaluative regimes, rather than seeking ways to anticipate which individuals and teams are poised to undertake breakthrough research if only they are provided with sufficient resources.


3. Citation Analysis

This section briefly reviews the notion and process of citation analysis, including its use to date in IS.

3.1 The Concept

"Citations are references to another textual element [relevant to] the citing article. ... In citation analysis, citations are counted from the citing texts. The unit of analysis for citation analysis is the scientific paper" (Leydesdorff 1998). Leydesdorff and others apply citation analysis to the study of cross-references within a literature, in order to document the intellectual structure of a discipline. This paper is concerned with its use for the somewhat different purpose of evaluating the quality and/or impact of works and their authors by means of the references made to them in refereed works.

Authors have cited prior works for centuries. Gradually, the extent to which a work was cited in subsequent literature emerged as an indicator of the work's influence, which in turn implied significance of the author. Whether the influence of work or author was of the nature of notability or notoriety was, and remains, generally ignored by citation analysis. Every citation counts equally, always provided that it is in a work recognised by whoever is doing the counting.

Attempts to use citation-counts to formally measure the quality and/or impact of works, and of their authors, are a fairly recent phenomenon. Indeed, the maintenance of citation indices appears to date only to about 1960, with the establishment of the Science Citation Index (SCI), associated with Garfield (1964). SCI became widely available only in 1988, on CD-ROM, and from 1997 on the Web (Meho 2007).

Considerable progress in the field of 'bibliometrics' or 'scientometrics' has ensued. The purposes to which citation analysis can be put include "1) paying homage to pioneers, 2) giving credit for related work, 3) substantiating one's knowledge claims, 4) providing background reading, 5) articulating a research methodology, 6) criticizing research, and 7) correcting one's earlier work" (Garfield 1977, as reported in Hansen et al. 2006).

SCI, and other initiatives discussed below, fall far short of the vision of the electronic library, which was conceived by Vannevar Bush (1945), and articulated by Ted Nelson in the 1960s as 'hypertext'. As outlined in Nelson's never-completed Project Xanadu, the electronic library would include the feature of 'transclusion', that is to say that quotations would be included by precise citing of the source, rather than by replicating some part of the content of the source.

3.2 Recent Developments

During the last 15 years, SCI has been subject to increasing competition. The advent of the open, public Internet, particularly since the Web exploded in 1993, has stimulated many developments. Individual journal publishing companies such as Elsevier, Blackwell, Kluwer, Springer and Taylor & Francis have developed automated cross-linkage services, at least within their own journal-sets. Meanwhile, the open access movement is endeavouring to produce not only an open eLibrary of refereed works, but also full and transparent cross-referencing within the literature. A leading project in the area was the Open Citation (OpCit) project in 1999-2002. An outgrowth from the OpCit project, the Citebase prototype, was referred to as 'Google for the refereed literature'. It took little time for Google itself to discover the possibility of a lucrative new channel: it launched Google Scholar in late 2004.

It is to be expected that citation analysis will give rise to a degree of contention, because any measure embodies biases in favour of some categories of researcher and against others. Dissatisfaction with it as a means of evaluating the quality and impact of works and of researchers has a long history (Hauffe 1994, MacRoberts & MacRoberts 1997, Adam 2002). A simple citation-count, for example, favours longstanding researchers over early-career researchers, because it takes time firstly to achieve publications, secondly for other researchers to discover and apply them, and thirdly for those other researchers' citing publications to appear. Using citations per paper, on the other hand, favours researchers with few publications but one or two 'big hits' over prolific researchers who dissipate their total count over a larger denominator. Many observers consider that a meaningful measure can only be achieved by taking into account the quality of the publishing venues in which the citations appear, and in which the paper itself was published.

Various proposals have been put forward for particular measures that can be used for particular purposes. Hirsch's proposal for an 'h-index' has unleashed a flurry of activity (Hirsch 2005). This involves sorting a person's publications in descending order of citations, then counting down them until the publication-count matches the citation-count. Hence a person with an h-index of 15 has 15 papers with at least 15 citations each. The measure is argued to balance quantity against impact. Hirsch is a physicist. Physics has a very large population of academics relative to most disciplines. It has the best-developed publication-and-citation mechanisms of any discipline, and unlike many others has not ceded control over its output to for-profit publishers. It appears that the values of h-index achieved, and the measure's effectiveness, are both highly dependent on the number of academics and publications in the discipline; and of course on the reliability of the citation-data.
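To make the mechanics concrete, the following is a minimal sketch (not drawn from Hirsch's paper) of how the index can be computed from a list of per-paper citation-counts; the figures in the example are illustrative only:

def h_index(citation_counts):
    """Compute the h-index from a list of per-paper citation-counts.
    A researcher has index h if h of their papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)   # most-cited paper first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank      # the top `rank` papers each have at least `rank` citations
        else:
            break
    return h

# Illustrative only: 15 papers with at least 15 citations each yields h = 15.
print(h_index([120, 80, 45, 40, 33, 30, 28, 25, 22, 20, 19, 18, 17, 16, 15, 9, 4, 1]))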

A range of refinements to the h-index have been proposed, and are summarised by Harzing (2007). These endeavour to balance the measure for such factors as time in the discipline, the distribution of the citation-counts, the length of time since each work was published, and the number of co-authors. Harzing provides a software tool called Publish or Perish, which computes the various indices based on citation-searches conducted using Google Scholar.

3.3 Citation Analysis in IS

Within the IS discipline, there is a long history of attention being paid to citations. The primary references appear to be Culnan (1978, 1986), Culnan & Swanson (1986), Culnan (1987), Cheon et al. (1992), Cooper et al. (1993), Eom et al. (1993), Holsapple et al. (1993), Eom (1996), Walstrom & Leonard (2000), Vessey et al. (2002), Schlogl (2003), Katerattanakul & Han (2003), Galliers & Meadows (2003), Hansen et al. (2006) and Whitley & Galliers (2007). That is a fairly short list of articles. Moreover, a 'General Search' on the ISI database shows that the most cited among them (Holsapple et al.) had only accumulated a count of 24 in April 2006 (which had risen to 30 in June 2007). If instead the ISI 'Cited Ref' facility is used, and a deep and well-informed analysis is conducted, then the most cited paper is established by combining several counts for a total of 61 for Culnan (1987) - up to 66 on re-visit in June 2007. On Google Scholar, the largest citation-count in April 2006 appeared as 63, for each of Culnan (1986) and Culnan (1987). When the test was repeated on 30 June 2007, the Culnan counts were 81 and 78, with Holsapple et al. up to 64. As will be shown, these are not insignificant counts, but they are not large ones either; and the difficulties and ambiguities involved in generating them represent a mini-case study in the application of citation analysis.

The primary purposes of the research reported in the papers listed above have been to develop an understanding of the intellectual structure of the IS discipline, of patterns of development within the discipline, and of the dependence of IS on reference disciplines. In some cases, the impact of particular journals has been in focus (in particular Cooper et al. 1993, Holsapple et al. 1993 and Katerattanakul & Han 2003). In one instance (Walstrom & Leonard 2000), highly-cited articles were the primary concern. Another, Hansen et al. (2006) reported on a deep analysis of the ways in which citing articles used (and abused) the cited paper. Galliers & Meadows (2003) used it to assess globalism and parochialism in IS research papers, and Whitley & Galliers (2007) analysed citations as a means of determining the characteristics of the European IS research community. The literature search conducted as part of the present project did not identify any articles in which the primary focus of the citation analysis was on individual researchers or research groupings.

A number of deficiencies in the use of citation analysis for this purpose are apparent from the outset. In the course of presenting the research, more will emerge, and a consolidated list is provided at the end of the paper. Despite these deficiencies, 'score-keeping' is increasingly being applied to the allocation of research resources. The work reported on here accordingly has significance as scholarship, but also has a political dimension.


4. The Research Purposes and Method

Because little prior research has been conducted in this specific area, the essential purpose was to provide insights into the effectiveness of citation analysis applied to individual IS researchers. Consideration was given to generating a list of highly-cited articles, and working down the list, accumulating total counts for each individual. Preliminary experimentation showed, however, that this would be too narrow a study to satisfy the aims. It was concluded that the more appropriate approach was to focus on individuals rather than publications.

Because of the vagaries of databases that are organised primarily on names, considerable depth of knowledge of individuals active in IS research is needed in order to achieve a reasonable degree of accuracy. The project accordingly focussed on researchers known to the author. In-depth analysis was first conducted in relation to an extensive list of academics active from the late 1970s to 2005 in the author's country of long-term residence. This was appropriate not only as a means of achieving reasonable data quality, but also because the scale was manageable: there are currently about 700 members of the IS discipline in Australia, little over 100 of whom have substantial publishing records during the period; and researcher impact assessment is a current political issue. Using the expertise gained in that pilot study, similar analyses were then performed for some leading researchers in North America and Europe.

One important insight that was sought was the extent to which the publishing venues that are generally regarded by IS researchers as carrying quality refereed articles are reflected in the databases that are available to support citation analysis. Rather than simply tabulating citation-counts, the research accordingly commenced by establishing a list of relevant journals and conference proceedings.

The set of venues was developed by reference to the now well-established literature on IS journals and their rankings, for which a bibliography is provided at Saunders (2005). Consideration was given to the lists and rankings there, including the specific rankings used by several universities, and available on that site. Details of individual journals were checked in the most comprehensive of the several collections (Lamp 2005).

The set selected is listed in the first two columns of Exhibit 1. (The remainder of the columnar format will be explained shortly). The inclusions represent a fairly conventional view of the key refereed journals on the management side of the IS discipline. The list significantly under-represents those journals that are in reference disciplines generally, especially in computer science, and at the intersection between IS and computer science. The reason for this approach is that otherwise a very large number of venues would need to be considered, and many included, in which the large majority of IS researchers neither read nor publish. Less conventionally, the list separates out a few 'AA-rated' journals, and divides the remainder into general, specialist and regional journals. There is, needless to say, ample scope for debate on the classifications; but the scheme was designed to aid the analysis, and did so.

Exhibit 1: Refereed Venues Selected

Journal Name | Abbrev. | SSCI | SCI | Issues Included

AA Journals (3)
Information Systems Research | ISR | Y |  | Only from 1994, Vol. 4 ?
Journal of Management Information Systems | JMIS | Y |  | Only from 1999, Vol. 16!
Management Information Systems Quarterly | MISQ | Y |  | Only from 1984, Vol. 8!

AA Journals in the Major Reference Disciplines (4)
Communications of the ACM (Research Articles only) | CACM |  | Y | From 1958, Vol. 1
Management Science | MS | Y |  | From 1955, Vol. 1
Academy of Management Journal | AoMJ | Y |  | From 1958, Vol. 1
Organization Science | OS | Y |  | From 1990, Vol. 1 ?

A Journals - General (9)
Communications of the AIS (Peer Reviewed Articles only) | CAIS |  |  | None!
Database | Data Base |  | Y | Only from 1982, Vol. 14 ?
Information Systems Frontiers | ISF |  | Y | Only from 2001, Vol. 3
Information Systems Journal | ISJ | Y |  | Only from 1995, Vol. 5
Information & Management | I&M | Y |  | Only from 1983, Vol. 6
Journal of the AIS | JAIS |  |  | None!
Journal of Information Systems | JIS |  |  | None!
Journal of Information Technology | JIT | Y |  | Only 18 articles
Wirtschaftsinformatik | WI |  | Y | Only from 1990, Vol. 32

A Journals - Specialist (15)
Decision Support Systems | DSS |  | Y | Only from 1985, Vol. 1
Electronic Markets | EM |  |  | None
International Journal of Electronic Commerce | IJEC | Y |  | From 1996, Vol. 1
Information & Organization | I&O |  |  | None
Information Systems Management | ISM | Y |  | Only from 1994, Vol. 11
Information Technology & People | IT&P |  |  | None!
Journal of End User Computing | JEUC |  |  | None
Journal of Global Information Management | JGIM |  |  | None
Journal of Information Systems Education | JISE |  |  | None
Journal of Information Systems Management | JISM |  |  | None
Journal of Management Systems | JMS |  |  | None
Journal of Organizational and End User Computing | JOEUC |  |  | None
Journal of Organizational Computing and Electronic Commerce | JOCEC |  |  | None
Journal of Strategic Information Systems | JSIS |  | Y | From 1992, Vol. 1 ?
The Information Society | TIS | Y |  | Only from 1997, Vol. 13! - ?

A Journals - Regional (3)
Australian Journal of Information Systems | AJIS |  |  | None
European Journal of Information Systems | EJIS |  | Y | Only from 1995, Vol. 4
Scandinavian Journal of Information Systems | SJIS |  |  | None

Consideration was given to supplementing the journals with the major refereed conferences. In this author's experience, ICIS is widely regarded as approaching AA status, ECIS as a generic A, and AMCIS, PACIS and ACIS may be considered by some as being generic A as well. These are accessible and indexed in the Association for Information Systems' AIS eLibrary, in the case of ICIS since it commenced in 1980, ECIS since 2000, and ACIS since 2002. The analysis focussed, however, primarily on journals.

As the next step, a survey was conducted of available citation indices. It was clear that Thomson / ISI needed to be included, because it is well-known and would be very likely to be used by evaluators. Others considered included:

Elsevier's Scopus has only been operational since late 2004. The next three are computer science indexes adjacent to IS, and at the time the research was conducted the last of them was still experimental. The decision was taken to utilise the Thomson/ISI SCI and SSCI Citation Indices, and to extract comparable data from Google Scholar. A more comprehensive project would be likely to add Scopus into the mix.

The third and fourth columns of Exhibit 1 show whether the journal is included in the Thomson/ISI SCI or SSCI Citation Indices. The final column shows the inferences drawn by the author regarding the extent of the Thomson/ISI coverage of the journal. Many problem areas were encountered, are reported on below, and are highlighted in the final column of Exhibit 1 in bold-face type.

The next step taken was to assemble a list of individuals active in the field in Australia during the relevant period. It is challenging to be sure of being comprehensive. People enter and depart from the discipline. Topic-areas do as well. For example, Ross Jeffery has focussed on empirical software engineering since about 1980. This can be defined to be within the IS discipline, outside it, or (most reasonably) both depending on the phase of history being discussed. There are overlaps with the Computer Science discipline, with various management disciplines, and with the IS profession. When determining the set of IS academics in a particular country, immigration, emigration and expatriates create definitional challenges.

The author drew on his experience in the field since about 1970. This includes the establishment of the Australian IS Academics Directory (Clarke 1988), and involvement in the subsequent incarnations as the Australasian IS Academics Directory (Clarke 1991), the Asia Pacific Directory of IS Researchers (Gable & Clarke 1994 and 1996), and the (A)ISWorld Faculty Directory (1997-).

The research was conducted within the context of a broader project to document the history of the IS discipline in Australia, reported on in Clarke (2006b). This included the development of a directory of full IS Professors in Australia (Clarke 2007). Somewhat arbitrary decisions were taken as to who was an expatriate Australian, and how long immigrants needed to be active in Australia to be treated for the purposes of this analysis as being Australian. Data was sought in relation to about 100 leading Australian IS researchers, plus 4 well-known and successful expatriates.

Data was extracted from the SCI and SSCI citation indices over several days in late January 2006. Access was gained through the ANU Library Reverse Proxy, by means of the company's 'Web of Science' offering. Both sets of searches were restricted to 1978-2006, across all Citation Indices (Science Citation Index - SCI-Expanded, Social Sciences Citation Index - SSCI and Arts & Humanities Citation Index - A&HCI). Multiple name-spellings and initials were checked, and where doubt arose were also cross-checked with the AIS eLibrary and the (A)ISWorld Faculty Directory.

The same process was then applied to a small sample of leading overseas researchers. The selection was purposive, favouring uncommon surnames in order to reduce the likelihood of pollution through the conflation of articles by multiple academics. The selection relied on this author's longstanding involvement in the field internationally, and his knowledge of the literature and the individuals concerned. About 25 were included in the sample.

Subsequently, Google Scholar was searched for the 30 Australian researchers who were most highly cited in the Thomson/ISI collection, and for the whole set of leading North American and European researchers. Supplementary research was then undertaken within the Thomson/ISI database. These elements were performed in respectively early and late April 2006. During the 3 months between the two rounds of analysis using ISI, the database and hence the citation-counts of course continued to accumulate, and some changes were apparent in both the Google collection and the Google service.

Some re-sampling was undertaken in June 2007, in order to provide information about the stability of the data collections, and the rate of change of citation-counts. Further experiments were performed, in order to enhance understanding of the quality of the counts. The following section reports on the results of the Thomson/ISI study.


5. Thomson/ISI

Thomson Scientific owns a service previously known as the Institute for Scientific Information (ISI). It publishes the Science Citation Index (SCI) and the Social Science Citation Index (SSCI) as two of three elements of its 'Web of Science' product. In January 2006, its site stated that SCI indexes 6,496 'journals' (although some are proceedings), and that SSCI indexes 1,857 'journals'. On 2 July 2007, the corresponding figures appeared to be 6,700 and 1,986, suggesting gradual accretion rather than any surge in inclusiveness. The company's policies in relation to the inclusion (and hence exclusion) of venues are explained at http://scientific.thomson.com/mjl/selection/. An essay on the topic is at Thomson (2005). This section presents the results of the pilot citation analyses of the Thomson/ISI collection, hereafter referred to more briefly as ISI.

5.1 Method

The data collected for each of the authors was the apparent count of articles, and the apparent total count of citations. The ISI site provides several search-techniques. The search-technique used was the 'General Search'. It was selected partly because it is the most obvious, and hence the most likely to be used by someone evaluating the apparent contribution of a particular academic or group of academics. It is also the most restrictive definition available, and hence could be argued to be the most appropriate to use when evaluating applicants for the most senior or well-endowed research appointments, and when evaluating the reputations of the most prominent groups of researchers. Other possible approaches are discussed in the final sub-section.

The ISI General Search provides a list of articles by all authors sharing the name in question, provided that they were published in venues that are in the ISI database. For each such article, a citation-count is provided, which is defined as the number of other articles in the ISI database that cite it. (It should be noted that although the term 'citation' is consistently used by all concerned, the analysis appears to actually utilise the entries in the reference list provided with the article, rather than the citations that appear within the text of the article).

The search-terms used in this study comprised author-surname combined with author-initial(s), where necessary wild-carded. Where doubt arose, AIS resources and/or the researcher's home-page and publications list were consulted. No researchers were detected in the sample who had published under different surnames, but multiple instances were detected in which initials varied. In the case of Australian researchers with common names, the search-terms were qualified with 'Australia'. The date-range was restricted to 1978 onwards. Each list that was generated by a search was inspected, in order to remove articles that appeared to the author to be by people other than the person being targeted.

Experiments conducted with Hirsch's h-index showed it to be impracticable, because of the many problems with the ISI data, particularly the misleadingly low counts that arise for all IS academics, and especially for those who are not leading researchers.

5.2 Citation-Counts for Australian IS Researchers

Exhibit 2 summarises the resulting data for the highest-scoring Australian IS academics. An arbitrary cut-off of 100 total citations was applied. (As it happened, none fell between 86 and 166). This resulted in the inclusion in the table of the 4 expatriates and 7 local researchers. Column 1 shows the total citation-count, and column 2 the number of papers found. Column 3 shows the citation-count for the person's most-cited paper, primarily to provide an indication of the upper bounds on citation-count-per-article.

Exhibit 2: ISI Data for Leading Australian IS Researchers, January 2006

CAVEAT: For reasons that are discussed progressively through the remainder of this paper, there are strong arguments for not utilising the data in this table, and for not utilising the ISI 'General Search', as a basis for assessing the impact of individual articles or individual researchers

 
Researcher | Citation Count | Number of Articles | Largest Per-Article Count

Expatriates
Iris Vessey (as I) | 601 | 35 | 111
Rick Watson (as RT), since 1989 | 485 | 28 | 78
Ted Stohr (as EA) | 217 | 12 | 108
Peter Weill (as P), since 2000 | 178 | 13 | 47

Locals
Marcus O'Connor (as M) | 354 | 31 | 66
Ron Weber (as R) | 328 | 22 | 38
Philip Yetton (as P and PW), since 1975 | 270 | 26 | 57
Michael Lawrence (as M) | 208 | 27 | 66
Michael Vitale (as M and MR), since 1995 | 179 | 14 | 107
Ross Jeffery (as DR, and as R) | 172 | 28 | 38
Marianne Broadbent (as M) | 166 | 24 | 36

5.3 Citation-Counts for A Few Leading International IS Academics

Exhibit 3 shows the same data as for Australian IS academics in Exhibit 2, but for some well-known leaders in the discipline in North America and Europe. About 25 individuals were selected. The same arbitrary cut-off of 100 total citations was applied.

Exhibit 3: ISI Data for A Few Leading International IS Academics, January 2006

CAVEATS:

  1. For reasons that are discussed progressively through the remainder of this paper, there are strong arguments for not utilising the data in this table, and for not utilising the ISI 'General Search', as a basis for assessing the impact of individual articles or individual researchers
  2. The selection of individuals is emphatically not an attempt to identify the 'intellectual leaders' in the IS discipline (which would require a much more careful and rather different research design). People were chosen who the author considered (a) were likely to have relatively high counts, but crucially also (b) had distinctive names, in order to reduce the risks of conflating their publications with those of other people, and of overlooking relevant publications
  3. The count for Eph McLean is particularly seriously under-stated, and that for Sal March probably also. The reasons are important, and are addressed later in this paper

Researcher | Citation Count | Number of Articles | Largest Per-Article Count

North American
Lynne Markus (as ML) | 1,335 | 39 | 296
Izak Benbasat (as I) | 1,281 | 71 | 155
Dan Robey (as D) | 1,247 | 45 | 202
Sirkka Jarvenpaa (as SL) | 960 | 40 | 107
Detmar Straub (as D and DW) | 873 | 49 | 160
Rudy Hirschheim (as R) | 600 | 44 | 107
Gordon Davis (as GB) | 428 | 48 | 125
Peter Keen (as PGW) | 427 | 21 | 188
** Sal March (as ST) | 190 | 29 | 37
** Eph(raim) McLean (as E and ER) | 119 | 30 | 31

Europeans
Kalle Lyytinen (as K), but in the USA since 2001 | 458 | 55 | 107
Leslie Willcocks (as L) | 231 | 42 | 28
Trevor Wood-Harper (as T, AT and TA) | 200 | 26 | 50
Bob Galliers (as RD, R and B), but in the USA since 2002 | 185 | 37 | 25
Guy Fitzgerald (as G) | 121 | 50 | 38
Enid Mumford (as E) | 103 | 21 | 42

The relatively low counts of the leading European academics are interesting. Rather than undertaking a necessarily superficial analysis here, the question is left for other venues. But see Galliers & Meadows (2003) and EJIS (2007).

5.4 Quality Assessment

An important aspect of this exploratory study concerned the effectiveness of the available databases in reflecting the extent to which the individuals concerned were actually cited. This sub-section reports on several tests that were applied, which identified a substantial set of deficiencies.

(1) Exclusions - Generally

The collection's coverage is inadequate in a number of ways that affect all disciplines, rather than only IS, which inevitably results in misleadingly low citation-counts. The primary causes are the exclusion of:

(2) Exclusions - IS Specifically

To the author's knowledge, a number of well-respected IS journals have been unable to achieve inclusion in ISI, in some cases despite strong cases and repeated requests. Further, many branches of computer science, particularly those in rapid development and hence with a very short half-life for publications, have largely abandoned journals in favour of conference papers. This means that many leading computer scientists have very low scores on ISI, and so do members of the IS discipline who operate at the boundary with computer science - within the sample studied, notably Ross Jeffery and Sal March.

An examination was conducted of the ISI database's coverage of the selected publishing venues listed in Exhibit 1. Nothing was found on the ISI site that declared which Issues of journals were in the database, and it was necessary to conduct experiments in order to infer the extent of the coverage. The examination disclosed a wide range of omissions, as follows:

(3) Over-Inclusiveness

Some categories of material are inappropriately included, resulting in the inflation of the item-counts and citation-counts of some authors. Key concerns are:

(4) Vagaries of Name-Based Discovery

The initial(s) used by and for authors can seriously affect their discoverability. Several authors in the samples have published under two sets of initials - most awkwardly Ross Jeffery as D.R. as well as R.; and three researchers were detected who have publications under three sets of initials. Considerable effort was necessary in multiple cases among the c. 130 analyses performed, and the accuracy of the measures is difficult to gauge.

Niels Bjørn-Andersen suffers three separate indignities. Firstly, both ISI and Google Scholar largely exclude publications in languages other than English, including Danish. Secondly, ISI does not support diacritics, so 'ø' is both stored and rendered as 'o'. A few papers with a very small number of citations were found under 'Bjorn-Andersen'. The third problem discovered was that, for the three of his papers that appear to have attracted the most citations, his name has been recorded as 'BjornAndersen'. Those papers can be detected using several search-strategies, but not using the author's actual surname.

Given the problem discovered with hyphens, a further test was performed on Trevor Wood-Harper. This disclosed that the same problem occurred - and was further complicated by the existence of three different sets of initials. (Re-testing was not performed until June 2007, and the citation-count shown for him in Exhibit 3 was deflated slightly in an attempt to achieve closer correlation with the figures that would have likely been visible in April 2006).

(5) Specific Discovery Issues - Case 1

The low counts of several well-known scholars were surprising. Experiments were conducted. The most instructive related to Eph McLean. An article that, in this author's view at that time, could have been expected to be among the most highly-cited in the discipline (Delone & McLean's 'Information Systems Success: The Quest for the Dependent Variable') did not appear in his list. It was published in ISR, which would have been AA-rated by many in the discipline from its inception. But ISR is indexed only from 5, 1 (March 1994), whereas the Delone & McLean paper appeared in 3, 1 (March 1992).

Using the 'Cited Ref Search' provided by ISI also fails to detect it under McLean E.R. (which appears to be an error in ISI's collection), but does detect a single citation when the search is performed on 'INFORM SYST RES' and '1992'. It can, on the other hand, be located using <Author = Delone W.H.>, with 7 variants of the citation, all of which are misleadingly foreshortened to 'INFORMATION SYSTEMS'. These disclose the (relatively very large) count of 448 citations. This issue is re-visited in the section on Google Scholar.

(6) Specific Discovery Issues - Case 2

A further test was undertaken in order to provide a comprehensive assessment of the inclusion and exclusion of the refereed works of a single author.

A highly convenient sample of 1 was selected: the author of this paper. In the author's view, this is legitimate, for a number of reasons. This was exploratory research, confronted by many challenges, not least the problems of false inclusions and exclusions, especially in the case of researchers with common names. I have a very common surname, I have a substantial publications record, those publications are scattered across a wide range of topics and venues, and some of my papers have had some impact. Most crucially, however, I am well-positioned to ensure accuracy in this particular analysis, because my publications are well-documented and I know them all.

The outcome of the analysis was as follows:

The results of this comprehensiveness test have implications for all IS academics. On the broadest view, an appropriate measure of impact would take into account the citation-counts for all 63 refereed papers (possibly weighted according to venue); but only 15 are included (24%). A more restrictive scope would encompass journal-articles only, in which case the coverage was still only 13/36 (36%). At the very least, the core IS journals should be included, in which case ISI's coverage still only scores 13/25 (52%).

(7) Stability

A re-check of a small sample of searches was performed in June 2007. It appeared that the ISI collection was fairly stable during the intervening 14 months, with the only additional items detected being papers published after early 2006. There was moderate growth in the counts, e.g. 25% for Peter Weill, and 39% for the author of this paper (albeit from a base only one-quarter the size of Peter Weill's).

5.5 Tentative Conclusions

The many deficiencies in the ISI database that were identified from these tests result in serious under-reporting of both article-counts and citation-counts. Some of the deficiencies would appear to affect all disciplines (e.g. the apparent incompleteness of journals that are claimed to be indexed, and the failure to differentiate refereed from unrefereed content in at least some journals). Many others would appear to mainly affect relatively new disciplines, such as the long delay before journals are included, and refusals to include some journals even when appropriate representations are made.

When applied to IS researchers, the extent of the under-reporting is not easily predictable, although it does appear to depend heavily on the individual's areas of interest and preferred venues. In summary:

The many deficiencies result in differential bias. The measures provided for the sample of leading researchers appear as if they may bear some resemblance to reality; but the vast majority of active IS researchers barely register on the ISI scales. Moreover, it does not appear to be practicable to compare the scores of individual researchers or research groups, nor to generate any kind of notional multiplier for specialists in particular sub-disciplines or research domains.

In addition, comparisons between disciplines would be extraordinarily difficult. The following appear to be important factors, with observations about how IS compares with disciplines generally:

5.6 Alternative Methods

This sub-section considers alternative ways in which the ISI database could be applied to the purpose. Two other categories of search are available. One is 'Advanced Search', which provides a form of Boolean operation on some (probably inferred) meta-data. This had to be resorted to on several occasions, e.g. while investigating the Bjørn-Andersen and McLean anomalies. If a common evaluation method were able to be defined, it may be feasible to use 'Advanced Search' to apply it. The facility's features seem not to be entirely consistent with the conventions of computer science and librarianship, however, so considerable care would be needed to construct a reliable scheme.

The other category of search is called 'Cited Ref Search'. A 'General Search' counts citations only of papers that are themselves within the collection. The 'Cited Ref Search', on the other hand, counts all citations within papers in the collection. This delivers a higher score for those authors who have published papers in venues which are outside ISI's collection scope, but which are cited in papers that are within ISI's collection scope.

In order to test the likely impact of applying this alternative approach to IS researchers, 'Cited Ref Search' was used to extract data for a sub-sample of researchers in each category. In order to investigate the impact on researchers whose citation-counts fall behind the leading pack, several researchers were included in the sub-sample whose counts in the previous round fell below the 100 threshold.

The design of the search-facility creates challenges because it makes only a small number of parameters available. For example, it does not enable restriction to <Address includes 'Australia'>. In addition, very little information is provided for each hit, the sequence provided is alphabetical by short journal title, and common names generate in excess of 1,000 hits. A further problem is that there is enormous variation in citation styles, and in the accuracy of the data in reference lists. This results in the appearance of there being far more articles than there actually are: many articles were found to have 2 or 3 entries, with instances found during the analysis of 5 and even 7 variants.

Exhibit 4 provides the results of this part of the study. The first three columns of the table show the number of citations for each author of articles that are in the ISI database, together with the count of those articles, and the largest citation-count found. (This data should correspond with that for the same researcher in Exhibits 2 and 3, but in practice there are many small variations, some caused by the 3-month gap between the studies that gave rise to those two tables and the study that gave rise to Exhibit 4). The next three columns show the same data for articles that are not in the ISI database. The final two columns show the sum of the two Citation-Count columns, and the apparent Expansion Factor, computed by dividing the Total Citations by the Citation-Count for articles in the ISI database.
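As a minimal worked example of the derivation of the final two columns, the arithmetic amounts to the following, here illustrated with Peter Keen's row from Exhibit 4 below:

# Derivation of the 'Total Citations' and 'Expansion Factor' columns,
# using Peter Keen's figures from Exhibit 4.
in_isi_citations = 425        # citations of his articles that are in the ISI database
outside_isi_citations = 1625  # citations of his articles that are not in the ISI database
total_citations = in_isi_citations + outside_isi_citations    # 2050
expansion_factor = total_citations / in_isi_citations          # 4.82, shown as 4.8

print(total_citations, round(expansion_factor, 1))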

Exhibit 4: ISI General Search cf. Cited Ref Search, April 2006

Researcher | ISI Citation-Count | ISI Article-Count | ISI Highest Cite-Count | Non-ISI Citation-Count | Non-ISI Article-Count | Non-ISI Highest Cite-Count | Total Citations | Expansion Factor

Sirkka Jarvenpaa | 973 | 34 | 110 | 575 | 118 | 88 | 1548 | 1.6
Peter Keen | 425 | 14 | 190 | 1625 | 325 | 463 | 2050 | 4.8
Eph McLean | 132 | 14 | 41 | 84 | 29 | 42 | 216 | 1.6
    Corrected | 132 | 14 | 41 | 532 | 20 | 448 | 664 | 5.0

David Avison (D) | 66 | 9 | 46 | 99 | 37 | 16 | 165 | 2.5
Ron Stamper (R, RK) | 59 | 13 | 16 | 255 | 149 | 19 | 314 | 5.3
Frank Land (F) | 74 | 16 | 19 | 161 | 88 | 25 | 235 | 3.2

Iris Vessey | 622 | 32 | 114 | 186 | 52 | 76 | 808 | 1.3

Philip Yetton | 278 | 20 | 59 | 65 | 51 | 6 | 343 | 1.2
Peter Seddon (P, PB) | 81 | 4 | 69 | 67 | 30 | 22 | 148 | 1.8
Graeme Shanks (G) | 66 | 10 | 15 | 51 | 32 | 7 | 117 | 1.8
Paula Swatman (PMC) | 43 | 4 | 31 | 102 | 39 | 23 | 145 | 3.4
Roger Clarke (R, RA) | 41 | 11 | 17 | 176 | 131 | 8 | 217 | 5.3
Guy Gable (GG) | 35 | 4 | 29 | 73 | 24 | 39 | 108 | 3.1

The data in Exhibit 4 enables the following inferences to be drawn:

Dependence on the General Search alone provides only a restricted measure of the impact or reputation of an academic. Moreover, it may give a seriously misleading impression of the impact of researchers who publish in non-ISI venues such as journals targeted at the IS profession and management, and books. To the extent that citation analysis of ISI data is used for evaluation purposes, a method needs to be carefully designed that reflects the objectives of the analysis.

Butler & Visser (2006) argue, however, that an antidote is available. On the basis of a substantial empirical analysis within the political science discipline, they conclude that the ISI collection can be mined for references to many types of publications, including books, book chapters, journals not indexed by ISI, and some conference publications. Replication of the study in the IS context would be needed before firm conclusions could be drawn. Google Scholar, an upstart alternative to ISI, is considered in the following section.


6. Google Scholar

Although Google Scholar was introduced in 2004, it is still an experimental service. From a bibliometric perspective, it is crude, because it is based on brute-force free-text analysis, without recourse to metadata, and without any systematic approach to testing venues for quality before including them. On the other hand, it has the advantages of substantial reach, ready accessibility, and popularity. It is inevitable that it will be used as a basis for citation analysis, and therefore important that it be compared against the more formal ISI database.

6.1 Method

The analysis presents considerable challenges. Searches generate long lists of hits, each of which is either an item indexed by Google, or is inferred from a citation in an item indexed by Google. The term 'item' is used in this case because, unlike ISI, Google indexes not only articles but also some books, some reports, and some conference proceedings. As is the case with ISI, it appears that the 'citations' counted are actually the entries in the reference-list to each item, rather than the citations within the article's text. Each item has a citation-count shown, inferred from the index; and the hits appear to be sorted in approximate sequence of apparent citation-count, highest first. Very limited documentation was found; and the service, although producing interesting and even valuable results, appears to be anything but stable.

Various approaches had to be experimented with, in order to generate useful data. From a researcher's perspective, Google's search facilities are among the weakest offered by search-engines, and it has a very primitive implementation of metadata. Because 'A' and 'I' are stop-words in the indexing logic, searches for names including those initials required careful construction. The common words 'is' and 'it' are also stop-words, and hence it is difficult to use the relevant expressions 'IS' (for 'information systems') and 'IT' (for 'information technology') in order to restrict the hits to something more manageable. Search-terms of the form <"I Vessey" OR "Vessey I"> appeared to generate the most useful results. Experiments with searching based on article-titles gave rise to other challenges, in particular the need to develop an even richer starting-point for the analysis: a comprehensive list of article-titles for each researcher.
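As a purely illustrative sketch (the helper below is hypothetical, not a tool used in this study), name-based search-terms of the kind described above can be generated mechanically for each researcher and each known set of initials:

def scholar_name_query(surname, initials_variants):
    """Build a Google Scholar search-term of the form used in this study,
    e.g. '"I Vessey" OR "Vessey I"', covering both initial-first and
    surname-first orderings for each known set of initials.
    Hypothetical helper; it does not handle hyphenated or multi-part surnames."""
    variants = []
    for initials in initials_variants:
        variants.append('"%s %s"' % (initials, surname))
        variants.append('"%s %s"' % (surname, initials))
    return " OR ".join(variants)

print(scholar_name_query("Vessey", ["I"]))       # "I Vessey" OR "Vessey I"
print(scholar_name_query("Straub", ["D", "DW"])) # four variants, OR-ed together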

The method adopted was to conduct searches using the names of the same samples of researchers whose ISI-derived data appears in Exhibits 2 and 3. A very small sample was used, because of the resource-intensity involved, and the experimental nature of the procedure. A purposive sub-sample was selected, in order to avoid conflation among multiple authors and the omission of entries. Only the first 10 articles for each author were gathered (generally, but not reliably, those with the highest-citation-count).

The intensity of the 'multiple authors with the same name' problem is highly varied. For many of the researchers for whom data is presented below, there was no evident conflation with others, i.e. their Top-10 appeared on the first page of 10 entries displayed by Google Scholar. For a few, it was necessary to skip some papers, and move to the second or even third page. To reach Eph McLean's 10th-ranked paper, it was necessary to check 60 titles, and to reach Ron Weber's 10th, 190 titles were inspected. The check of this author's own entries was more problematical, and is further discussed below.

The aim of the research was twofold. It was important to assess the usefulness of Google Scholar citation-analysis as a means of measuring the impact of IS researchers. In addition, insight was sought into the quality of the ISI data. The report in this section addresses both aims.

6.2 Quality Assessment

Considerable caution must be applied in using Google Scholar as a means of assessing researcher impact, or of judging the quality of the ISI service. Google's data and processes are new, in a state of flux, unaudited, and even less transparent than ISI's. A number of experiments were undertaken in order to gain an insight into the accuracy and reliability of Google Scholar results.

(1) A Deep, Person-Specific Test

A preliminary test was conducted on my own, rather common name. As before, the justification for this is the need for confidence in recognising matches and non-matches.

The search resulted in 12,700 hits in April 2006 (but 34,800 when the experiment was repeated in June 2007). To reach the 10th-most-cited of my own papers, it was necessary to inspect the first 558 entries. The challenges involved in this kind of analysis are underlined by the fact that those first 558 entries included a moderate number of papers by other R. Clarkes on topics and in literatures that are at least adjacent to topics I have published on and venues I have published in. These could have easily been mistakenly assigned to me by a researcher who lacked a detailed knowledge of my publications list. Similarly, false-negatives would have easily arisen. There are many researchers with common names, and hence accurate citation analysis based on name alone is difficult to achieve.

A further experiment was conducted in June 2007 to check the effectiveness of more restrictive search-terms. The term <information OR privacy author:Clarke author:R> was used in an endeavour to filter out most extraneous papers without losing too many genuine ones. The 10th most-cited paper was then found at number 33 of 7,380, rather than 558 of 34,800. The total citations for those 10 papers was 481 (cf. 417 when the search was first performed 14 months earlier). The lowest counts of the 10 were 16, 19 and 22; but later counts were larger (30 at no. 59, 37 at no. 74, and 25 at no. 112); so the sequencing is not reliably by citation-count, and there is no apparent way to influence the sequence of presentation of the results of a search.

The comprehensiveness of the coverage was tested by continuing the scan across the first 1,000 entries. (Google Scholar does not appear to enable display of any more than the first 1,000 hits). This identified a total of 117 items with 1,365 citations (about a dozen of which represented double-counting of publications, although apparently not of citations).

The extent to which the search-term missed papers was tested using its complement, i.e. < -information -privacy author:Clarke author:R>. A scan of the first 1,000 entries of 19,900 detected only 7 papers missed from the preceding search (numbers 161, 531, 534, 690, 792, 795 and 861) with a total of 76 citations (respectively 27, 10, 10, 8, 7, 7 and 7).

These experiments together suggest that a person-specific test is feasible, but that it is infeasible to automate it in the first instance, and would be very challenging to fully automate it in order to support periodic re-calculation.

(2) Multiple, Person-Specific Tests

In order to assess the implications for researchers more generally, a sub-set of 7 of the Australian researchers was selected, including 3 of the 7 leaders and 4 of those whose ISI counts fell below the threshold. Their top-10 Google citation-counts were extracted in April 2006, and comparison made with the ISI results. In each case, careful comparison was necessary, to ensure accurate matching of the articles uncovered by Google against those disclosed by ISI. The data is shown in tables A1 to A7, in Appendix A.

Google finds many more items than ISI, and finds many more citations of those items than ISI does. In the sample, the ISI count includes only 39/70 items, and even for those 39 the total ISI citation-count is only 45% of the total Google citation-count.

To some extent, this is a natural result of the very different approaches the two services adopt: indiscriminate inclusiveness on the one hand, and narrow exclusivity on the other. However, a number of aspects throw serious doubt on the adequacy of ISI as a basis on which to assess IS academics' research impact:

(3) Anomaly Investigations

The citation-counts were further examined for several researchers whose ISI counts had been lower than this author had anticipated.

Ron Stamper (as R. and R.K.) generated only 32 citations from 13 articles on ISI. On Google Scholar the count in April 2006 was 511 citations of 36 articles (the largest single count being 60), plus 64 citations of 1 book, for a total of 575 citations. The scan found those 37 relevant entries among the first 100 hits of a total of 7,970 hits in all, and doubtless somewhat under-counts. A repeat of the experiment in June 2007, using the search-term <author:Stamper author:R*>, found 33 relevant entries among the first 100 hits of a total of only 830 entries, but for a total of 677 citations, or 18% more than 14 months earlier. An expansion rate of a factor of 18 from ISI to Google is extreme, and suggests that this particular researcher's specialisations are very poorly represented in ISI's collection.

David Avison generated under 100 citations on ISI, including 56 for a CACM paper in 1999. On Google Scholar, that paper alone generates 219 citations, an Australian Computer Journal article 199, three IT&P papers around 50 each, another CACM 43, and a book 518. A researcher whose Google Scholar scores are an h-index of 13, and an h citation-total of 1,282, fell well below the 100-citation cut-off used for Exhibit 3.

(4) An Article-Specific Test

A further experiment was conducted in order to test the impact of ISI's collection-closedness in comparison with Google's open-endedness. DeLone & McLean's 'Information Systems Success: The Quest for the Dependent Variable' was sought by keying the search-term <E McLean W DeLone> into Google Scholar, and critically considering the results. The test was performed twice, in early April 2006 and late April 2006. The results differed in ways that suggested that, during this period, Google was actively working on the manner in which its software counts citations and presents hits. The later, apparently better-organised counts are used here. The analysis was complicated by the following:

The raw results comprised 824 citations for the main entry (and a total of 832 hits). Based on a limited pseudo-random sample from the first 40, many appeared indeed to be attributable to the paper. This is a citation-count of a very high order. An indication of this is that the largest ISI citation-count for an IS paper located during this research was 296, for a paper in CACM by Lynne Markus. In Google, that paper scored 472. So the DeLone & McLean paper scored 75% more Google-citations than the Google-citation score of the highest-ranked IS paper otherwise located in the ISI database during the course of the research.

The experiment was repeated in June 2007, with significantly different results. One change was that the output was far better organised than 14 months earlier, with most duplications removed and apparently consolidated into a single entry. The second was that the citation-count was 1,166 in the principal entry (plus 22 more in a mere 4 other entries). This represented an increase of 44% on the citation-count 14 months before.

It appears that, as a result of what is quite possibly a data-capture error, ISI denies the authors the benefit of being seen to have co-authored one of the most-referenced papers in the entire discipline. (Further information on highly-cited papers is provided in the following section.) More generally, it is not straightforward to confidently construct searches that determine citation-counts for specific papers.

6.3 Google and Leaders

Google was assessed as a potential vehicle for score-keeping by extracting and summarising Google citations for the same sets of academics as were reported on in Exhibits 2 (Australian researchers) and 3 (a few leading international researchers). In both cases, the sequence in which the researchers are listed is the same as in the earlier Exhibit. This part of the analysis was conducted in June 2007.

The approach adopted differs from that taken with ISI. Exhaustive counting of all papers was trialled, but the nature of the data and the limited granularity of Google's search-tools made it a very challenging exercise. Because substantially more data is available, use of the h-index is feasible. Moreover it is advantageous, because it has the effect of curtailing the search.

The preliminary trial was performed on data relating to this author's own publications. After constructing what appeared to be an efficient mechanism, it was still necessary to scan 2,000 entries in order to extract 124 items totalling 1,441 citations. Application of the h-index had the effect of limiting the search to only the first 18 of the 124 items. The 106 papers omitted had at most 17 citations each and an average of only 7, so my citation-count was reduced by a little over half, from 1,441 to 684. On the other hand, I am relatively prolific, in both the good and bad senses of the term, and hence most authors would lose far less than half of their citation-count; and the 'long tail' is in any case far less significant than the high-impact papers at the top of the list.
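For clarity, the h-index calculation applied throughout this section can be expressed as a short sketch; the list of per-item counts below is invented for illustration and does not reproduce any researcher's actual data.

    # A minimal sketch of the h-index calculation used here: the largest h such
    # that h items each have at least h citations, together with the
    # citation-count of those h items.

    def h_index(counts):
        ranked = sorted(counts, reverse=True)
        h = 0
        for rank, c in enumerate(ranked, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h, sum(ranked[:h])

    # Invented per-item citation-counts, for illustration only.
    counts = [113, 97, 60, 44, 30, 25, 22, 19, 16, 9, 7, 3]
    h, h_core = h_index(counts)
    print(h, h_core)   # here: an h-index of 9, with 426 citations across those 9 items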

The data shown in Exhibits 5 and 6 accordingly comprises Hirsch's h-index, the citation-count for the researcher's publications that were included in the h-index, and the largest per-item count found in that set.

Exhibit 5: Google Data for Leading Australian IS Researchers, June 2007

CAVEATS:

  1. This data is based on an experimental service, and a collection of undeclared extent
  2. The scores for researchers marked with asterisks may be seriously under-stated due to the non-inclusion of the proceedings of conferences at the technical end of the IS discipline

                                  h-Index    Citation Count of h Items    Largest Per-Item Count

Expatriates
Iris Vessey                          26               1,481                       196
Rick Watson, since 1989              26               1,755                       236
Ted/Ed Stohr                         15               1,077                       689
Peter Weill, since 2000              21               1,673                       268

Locals
** Marcus O'Connor                   11                 357                        84
Ron Weber                            20               1,469                       172
Philip Yetton, since 1975            16               1,000                       488
** Michael Lawrence                  12                 252                        42
Michael Vitale, since 1995           14               1,152                       296
** Ross Jeffery                      18                 763                        76
Marianne Broadbent                   13               1,002                       230

Exhibit 6: Google Data for A Few Leading International IS Academics, June 2007

CAVEATS:

  1. This data is based on an experimental service, and a collection of undeclared extent
  2. The scores for researchers marked with asterisks may be seriously under-stated due to the non-inclusion of the proceedings of conferences at the technical end of the IS discipline

                                     h-Index    Citation Count of h Items    Largest Per-Item Count

North American
Lynne Markus                            36               4,664                       591
Izak Benbasat                           36               4,589                       790
Dan Robey                               33               3,245                       539
Sirkka Jarvenpaa                        34               4,406                       635
Detmar Straub                           23               2,708                       405
Rudy Hirschheim                         33               3,159                       326
Gordon Davis                            21               2,327                       484
Peter Keen                              21               2,012                       350
** Sal March                            11                 472                       187
Eph(raim) McLean                        18               1,950                     1,166

Europeans
Kalle Lyytinen, USA since 2001          31               2,511                       285
Leslie Willcocks                        34               2,286                       208
Trevor Wood-Harper                      15                 779                       199
Bob Galliers, USA since 2002            24               1,411                       161
Guy Fitzgerald                          16               1,034                       518
Enid Mumford                            21               1,286                       178

Hirsch is reported in Meho (2007) to have suggested that an ISI h-index of 20 indicates a 'successful' physicist, and 40 an 'outstanding' physicist. Similar heuristics for IS could be suggested as a Google h-index of 15 for 'successful' and 25 for 'outstanding'. On the other hand, even the h-index represents mechanistic reductionism, purporting to reflect a complex, multi-dimensional reality in a single measure.
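If those tentative cut-offs were ever adopted, they could be applied mechanically, as the following sketch shows; the labels and thresholds are simply the ones suggested above, not established norms.

    # A sketch applying the heuristic suggested above: a Google h-index of 15
    # for 'successful' and 25 for 'outstanding' IS researchers. The cut-offs
    # are tentative suggestions from the text, not established norms.

    def impact_label(google_h_index):
        if google_h_index >= 25:
            return 'outstanding'
        if google_h_index >= 15:
            return 'successful'
        return 'below the suggested thresholds'

    print(impact_label(26))   # 'outstanding'
    print(impact_label(13))   # 'below the suggested thresholds'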

Several further experiments were performed, in order to gain an insight into the highest citation-counts for individual articles in the IS discipline. Walstrom & Leonard (2000, Table 7) identified 10 highly-cited papers published between 1988 and 1994. Google showed citation-counts ranging from 49, 111 and 142, up to 712, 743 and 1,468, with a mean of 484. Markus (1983), the object of study of Hansen et al. (2006), showed a Google citation-count in June 2007 of 602. Hansen et al. also drew attention to DeSanctis & Poole (1994), which showed 718 Google citations.

Based on this loose sampling method, in June 2007, many well-known papers were scoring in the 700s, and DeLone & McLean (1993) in the 1100s. The article with the standout citation-count was Davis et al. (1989), in Mngt. Sci., with 1,468. Of the three authors of that paper, two do not publish in the IS literature, but the other, Fred (F.D.) Davis, does. Examination of his Google Scholar citation-counts disclosed an article with a yet-higher citation-count - Davis (1989), in MISQ - which scored 2,516 citations. Davis' citation-count of h-items (column 2 of Exhibit 6) was 7,161, considerably higher than the highest otherwise encountered during this study. Davis' h-index was, however, rather lower than for many of the other leading researchers in the sample, at 22.

It appears that a great many of the citations of Davis' two front-running papers are from researchers in disciplines cognate with IS rather than from within IS. They are of course 'legitimate' as measures of impact; but the size of the counts underlines the fact that an IS researcher who attracts attention from academics outside IS may achieve a significantly higher impact-measure, particularly where the population of the adjacent discipline(s) is large.

6.4 Google and 'the Middle Ground'

ISI provided results for leading IS academics that were incomplete and misleading, but not entirely untenable, at least in the sense that the relativities among members of the sample are roughly maintained across the ISI and Google measures. ISI was of no value, however, for researchers outside the narrow band of well-established leaders with multiple publications in AA journals.

The Google Scholar data is deeper and finer-grained than that extracted from ISI. A test was therefore undertaken to determine whether meaningful impact measures could be generated from Google citation-counts for the next level of researchers. This was done using a purposive sub-sample of 8 Australians, all of whom are chaired or chairworthy, but whose ISI citation-counts fell between 34 and 70 - well below the threshold of 100 used in Exhibits 2 and 3.

In Exhibit 7 below, the comparative data for the 8 researchers is shown, sorted in descending order of ISI citation-count. The ISI data in the left-hand section can be seen to be too thin to support any meaningful analysis. The Google data in the right-hand section, on the other hand, is much deeper, and hence appears to be capable of more reliable interpretation.

Exhibit 7: ISI/Google Comparison for 'the Middle Ground', June 2007

 
                                        ISI-Derived Data                          Google-Derived Data
                                  Citation   Number of   Largest Per-      h-Index   Citation Count   Largest Per-
                                  Count      Articles    Article Count               of h Items       Item Count

Peter Seddon (P, PB)                 70          6            60              13          674             220
Graeme Shanks (G)                    61         13            14              13          469             114
Paula Swatman (PMC)                  53          9            29              15          597             165
Roger Clarke (R, RA)                 44         22            16              18          684             113
Michael Rosemann (M), since 1999     41         14            18              17          618              97
Chris Sauer (C), until 2000          40         18            13              11          493             180
Simpson Poon (S), until 2003         37          3            29              12          496             165
Guy Gable (GG)                       34          9            23              12          516             141

The Google data shows substantial h-indices which overlap with the sample of leaders in earlier tables. The citation-counts of between 469 and 684, on the other hand, and the largest per-item counts of between 97 and 220, are noticeably lower. Although ranking within this sample of 'Middle Ground' researchers would be contentious, the data provides a basis for comparison both with the impact leaders in the main samples above, and with other researchers.

6.5 Google and 'Early-Career Researchers'

If citation-counts are to be used for resource-allocation, then there has to be some 'rite of passage' whereby new leading researchers can emerge.

On the one hand, it would appear to be futile to depend on the historical record of citations to play a part, because of the long lead-times involved, and the challenges of performing research without already having funding to support it. Nonetheless, it appeared necessary to perform some experimentation, to see whether some metric might be devised. For example, if the data were sufficiently finely-grained, it might be feasible to use econometric techniques to detect citation-count growth-patterns. Alternatively, counts of article downloads might be used (Harnad & Brody 2004).
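By way of illustration only, the following sketch fits a simple exponential trend to a hypothetical early-career researcher's yearly citation-counts; both the data and the technique are merely indicative of the kind of analysis that finer-grained data might support.

    # A minimal sketch of growth-pattern detection, assuming yearly citation-
    # counts were available. It fits an exponential trend by ordinary least
    # squares on the logged counts; the yearly figures are invented.

    import math

    years  = [1, 2, 3, 4]        # years since first publication
    counts = [2, 5, 9, 21]       # hypothetical citations received in each year

    n = len(years)
    logs = [math.log(c) for c in counts]
    mean_x = sum(years) / n
    mean_y = sum(logs) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs))
             / sum((x - mean_x) ** 2 for x in years))

    growth = math.exp(slope) - 1
    print(f"Estimated year-on-year citation growth: {growth:.0%}")   # roughly 115%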

A modest sample of 'early-career researchers' was prepared, comprising people who had come to the author's notice through multiple refereed publications and award-winning conference papers. All suffered the same problem within the Google Scholar collection that even Middle-Ground researchers suffered within ISI: the data was simply too shallow to enable any meaningful analysis to be performed.

Hansen et al. (2006, Figure 1) shows the timeline of citations of Markus (1983). The distribution was roughly symmetrical over its first decade, peaking after 5 years, with total citations about 5 times the count in its peak year (although the measures are confounded by a 'mid-life kicker' published in its 6th year). A total of 9 citations in the first 2-1/2 years indicates no early signs of the article becoming a classic, and hence the author's subsequent eminence could not have been 'predicted' at that time by an analysis of citation-counts for this article. Clearly, much more research is needed; but citation-analysis appears to be an unpromising way of discovering 'rising stars'.

6.6 Tentative Conclusions

Some perspective on what 100 citations means in the IS discipline can be gained from an assessment of the total citation-count of the top 10 items that are discovered in response to some terms of considerable popularity in recent years. The terms were not selected in any systematic manner. The count of the apparently 10th-most-cited article containing the expression is also highlighted, because it provides an indication of the depth of the heavily-cited literature using that term:

Following discovery of the highly-cited Fred Davis papers, this study was applied in July 2007 to the relevant key-words-in-title, producing a contender for the term with the highest top-10 citation-count (but not the highest count for the 10th most cited article):

Re-tests in July 2007 showed substantial growth in Google citation-counts during the intervening 14 months: 67% for "strategic alignment" to 1,186, 61% for "key issues in information systems management" to 1,499, 53% for "citation analysis" AND "information systems" to 581, 50% for "technology acceptance model" to 3,127, and 46% for "B2B" to 1,395. "B2C", on the other hand, had grown only 12% to 208. These changes might result from considerable expansion in the Google Scholar catchment, an explosion in IS publications, and/or the existence of bandwagon effects in IS research.
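The arithmetic behind these growth figures is straightforward; the following sketch simply recovers the approximate counts at the time of the original scan from the July 2007 counts and the reported percentages.

    # A small check of the growth figures quoted above: given a mid-2007 count
    # and the reported 14-month growth percentage, recover the approximate
    # count at the time of the original scan.

    def earlier_count(current, growth_pct):
        return round(current / (1 + growth_pct / 100))

    print(earlier_count(3127, 50))   # "technology acceptance model": roughly 2,085
    print(earlier_count(1186, 67))   # "strategic alignment": roughly 710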

Even in topic-areas that are mainstream within the discipline, relatively few items currently achieve 100 citations (38 in the above sample of 13 topic-areas in early 2006, and only a few more by mid-2007). Moreover, many topics either fail to attract any more interest, or subsequent researchers do not develop a 'cumulative tradition', in that they fail to cite predecessor papers. Hence citation-counts above about 75-100 on Google Scholar currently appear to indicate a high-impact article, and above 40-50 a significant-impact article. Appropriate thresholds on ISI General Search would appear to be somewhat less than half of those on Google, perhaps 50 and 20. These thresholds are of course specific to IS, and other levels would be likely to be appropriate in other disciplines, and perhaps in (M)IS in the USA. Such thresholds are also limited to a period of perhaps 2-3 years, because of the growth inherent in citation-counts at this early stage in the maturation of both the discipline and the databases on which the analysis depends.

Another aspect of interest is the delay-factor before citations begin to accumulate. Some insight was gained from an informal sampling of recent MISQ articles, supplemented by searches for the last few years' titles of this author's own refereed works. A rule of thumb appears to be that there is a delay of 6 months before any citations are detected by Google, and of 18 months before any significant citation-count is apparent. The delay is rather longer on ISI General Search. This is to be expected, because of the inclusion of edited and lightly-refereed venues in Google, which have a shorter review-and-publication cycle than ISI-included journals, most of which are heavily refereed. Further understanding of citation accumulation patterns will depend on the development of disciplined and repeated extractions from the citation services.


7. Implications

Reputation and impact are highly multi-dimensional constructs, and reduction to a score is morally dubious, intellectually unsatisfying, and economically and practically counter-productive. On the other hand, the frequency with which a researcher's publications are cited by other authors is a factor that an assessment of reputation would ignore at its peril.

7.1 Deficiencies in Citation Analysis

The research presented in this paper has demonstrated that there are enormous problems to be confronted in applying currently available databases to the purpose. Exhibit 8 summarises them.

Exhibit 8: Deficiencies in Citation Analysis as a Means of Assessing Researcher Impact

Citation Databases, particularly ISI

The Articles

Contextual Factors

7.2 Coping Mechanisms

Large collections of data are now available to be mined, and members of the search and selection committees that are responsible for the appointment of key staff-members are applying citation analysis as part of their assessment of prospective appointees. It is reasonable to expect that evaluators within and adjacent to the IS discipline are incorporating subtlety, sophistication and insight into their work.

Administrators and resource-allocators, on the other hand, might well not do so, particularly where impact assessment is institutionalised, as in the U.K. RAE, the N.Z. PBRF and the Australian RQF schemes. These are political mechanisms aimed at focussing research funding on a small proportion of research centres within a small proportion of institutions. They are mass-production exercises, and are subject to heavily bureaucratic processes and definitions. Citation analysis used in such processes is inevitably largely mechanical, with simple rules applied to all disciplines, irrespective of their appropriateness.

Given the inevitability that citation analysis will be used in unreasonable ways, it would be prudent for the IS discipline to develop and publish norms that will mitigate the harm. External evaluators can be legitimately challenged, if necessary through legal process, if they blindly apply general rules to a discipline that has established and well-grounded evaluation processes. Exhibit 9 presents some rules of thumb that are suggested by the pilot analysis reported on in this paper.

Exhibit 9: Heuristics for Citation Analysis in IS

The discipline as a whole, through its professional body, could undertake measures that would be instrumental in the emergence of an effective framework for score-keeping. Exhibit 10 suggests some measures that arise from the analysis conducted in this paper.

Exhibit 10: Actions the AIS Can Take


8. Conclusions

There may be a world in which the electronic library envisioned by Bush and Nelson has come into existence, and in which all citations can be counted, traced, and evaluated.

Back in the real world, however, the electronic library is deficient in a great many ways. It is fragmented and very poorly cross-linked. And copyright-owners (including discipline associations, but particularly the for-profit corporations that publish and exercise control over the majority of journals) are currently building in more and more substantial barriers rather than working towards integration. It remains to be seen whether those barriers will be broken down by the communitarian open access movement, or by the new generation of corporations spear-headed by Google.

Simplistic application of raw citation-counts to evaluate the performance of individual researchers and of research groupings would disadvantage some disciplines, many research groupings, and many individual researchers. The IS discipline is highly exposed to this risk. For the many reasons identified in this paper, citation-counts will suggest that most IS researchers fall short of the criteria demanded for the higher rankings. As a result, the IS discipline in at least some countries is confronted by the spectre of reduced access to research funding. Citation analysis is currently a very blunt weapon: it should be applied only with great care, yet it appears very likely to harm the interests of less politically powerful disciplines such as IS.


References

Except where otherwise stated, URLs were last accessed in early April 2006.

Adam D. (2002) 'Citation analysis: The counting house' Nature 415 (2002) 726-729

ARC (2006) 'Research Fields, Courses and Disciplines Classification (RFCD)' Australian Research Council, undated, apparently of 21 February 2006, at http://www.arc.gov.au/apply_grants/rfcd_seo_codes.htm

Bush V. (1945) 'As We May Think' The Atlantic Monthly. July 1945, at http://www.theatlantic.com/doc/194507/bush

Butler L. & Visser M. (2006) 'Extending citation analysis to non-source items' Scientometrics 66, 2 (2006) 327-343

Cheon M.J., Lee C.C. & Grover V. (1992) 'Research in MIS - Points Of Work and Reference - A Replication and Extension of the Culnan and Swanson Study' Data Base 23, 2 (September 1992) 21-29

Clarke R. (Ed.) (1988) 'Australian Information Systems Academics: 1988/89 Directory' Australian National University, November 1988

Clarke R. (Ed.) (1991) 'Australasian Information Systems Academics: 1991 Directory' Australian National University, April 1991

Clarke R. (2006a) 'Plagiarism by Academics: More Complex Than It Seems' J. Assoc. Infor. Syst. 7, 2 (February 2006), at http://www.rogerclarke.com/SOS/Plag0506.html

Clarke R. (2006b) 'Key Aspects of the History of the Information Systems Discipline in Australia' Australian Journal of Information Systems 14, 1 (November 2006) 123-140, at http://dl.acs.org.au/index.php/ajis/article/view/12/11

Clarke R. (2007) 'A Retrospective on the Information Systems Discipline in Australia: Appendix 4: Professors' Xamax Consultancy Pty Ltd, March 2007, at http://www.rogerclarke.com/SOS/AISHistApp4-0703.html

Clarke R. & Kingsley D. (2007) 'ePublishing's Impacts on Journals and Journal Articles' Xamax Consultancy Pty Ltd, April 2007, http://www.rogerclarke.com/EC/ePublAc.html

Cooper R.B., Blair D. & Pao M. (1993) 'Communicating MIS research: a citation study of journal influence' Infor. Processing & Mngt 29, 1 (Jan.-Feb. 1993) 113-127

Culnan M.J. (1978) 'Analysis of Information Usage Patterns of Academics and Practitioners in Computer Field - Citation Analysis of a National Conference Proceedings' Infor. Processing & Mngt 14, 6 (1978) 395-404

Culnan M.J. (1986) 'The Intellectual-Development of Management-Information-Systems, 1972-1982 - A Cocitation Analysis' Mngt Sci. 32, 2 (February 1986) 156-172

Culnan M.J. (1987) 'Mapping the Intellectual Structure of MIS, 1980-1985: A Co-Citation Analysis' MIS Qtly 11, 3 (September 1987) 341-353

Culnan M.J. & Swanson E.B. (1986) 'Research In Management-Information-Systems, 1980-1984 - Points Of Work And Reference' MIS Qtly 10, 3 (September 1986) 289-302

Davis F.D. (1989) 'Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology' MIS Quarterly 13, 3 (September 1989) 319-340

Davis F.D., Bagozzi R.P. & Warshaw P.R. (1989) 'User Acceptance of Computer Technology: A Comparison of Two Theoretical Models' Mngt. Sci. 35, 8 (August 1989) 982-1003

DeSanctis G. & Poole M.S. (1994) 'Capturing the Complexity in Advanced Technology Use: Adaptive Structuration Theory' Organization Science, 5, 2 (May 1994) 121-147

DEST (2005) 'Research Quality Framework: Assessing the quality and impact of research in Australia: Final Advice on the Preferred RQF Model' Department of Education, Science & Training, December 2005, at http://www.dest.gov.au/sectors/research_sector/policies_issues_reviews/key_issues/research_quality_framework/final_advice_on_preferred_rqf_model.htm

DEST (2006) 'Research Quality Framework Guidelines Scoping Workshop: Workshop Summary' Department of Education, Science & Training, 9 February 2006, at http://www.dest.gov.au/NR/rdonlyres/1D5A7163-A754-48B7-88B7-5530F486EDD4/9772/RQFGuidelinesScopingWorkshopOutcomesFINAL17March06.pdf

DEST (2007) 'Research Quality Framework Fact Sheet' undated, but apparently of May 2007, at http://www.dest.gov.au/sectors/research_sector/policies_issues_reviews/key_issues/research_quality_framework/documents/RQF_factsheet_2006/RQF_pdf.htm, accessed 30 June 2007

EJIS (2007) 'Special Section on the European Information Systems Academy' Euro. J. Infor. Syst. 16, 1 (February 2007), at http://www.palgrave-journals.com/ejis/journal/v16/n1/index.html

Eom S.B. (1996) 'Mapping The Intellectual Structure Of Research In Decision Support Systems Through Author Cocitation Analysis (1971-1993)' Decision Support Systems 16, 4 (April 1996) 315-338

Eom S.B., Lee S.M. & Kim J.K. (1993) 'The Intellectual Structure Of Decision-Support Systems (1971-1989)' Decision Support Systems 10, 1 (July 1993) 19-35

Gable G. & Clarke R. (Eds.) (1994) 'Asia Pacific Directory of Information Systems Researchers: 1994' National University of Singapore, 1994

Gable G. & Clarke R. (Eds.) (1996) 'Asia Pacific Directory of Information Systems Researchers: 1996' National University of Singapore, 1996

Galliers R.D. & Meadows M. (2003) 'A Discipline Divided: Globalization and Parochialism in Information Systems Research' Commun. Association for Information Systems 11, 5 (January 2003) 108-117, at http://cais.aisnet.org/articles/default.asp?vol=11&art=5, accessed July 2007

Garfield E. (1964) 'Science Citation Index - A New Dimension in Indexing' Science 144, 3619 (8 May 1964) 649-654, at http://www.garfield.library.upenn.edu/essays/v7p525y1984.pdf

Garfield E. (1977) 'Can citation indexing be automated?' in 'Essays of an information scientist' ISI Press, Philadelphia PA, 1977, pp. 84-90, quoted in Hansen et al. (2006)

Hansen S., Lyytinen K. & Markus M.L. (2006) 'The Legacy of 'Power and Politics' in Disciplinary Discourse' Proc. 27th Int'l Conf. in Infor. Syst., Milwaukee, December 2006, at http://aisel.aisnet.org/password.asp?Vpath=ICIS/2006&PDFpath=EPI-IS-01.pdf, accessed July 2007

Harnad S. & Brody T. (2004) 'Comparing the impact of open access (OA) vs. non-OA articles in the same journals' D-Lib 10, 6 (June 2004), at http://www.dlib.org/dlib/june04/harnad/06harnad.html, accessed July 2007

Harzing A.-W. (2007) 'Reflections on the h-index' University of Melbourne, 25 June 2007, at http://www.harzing.com/pop_hindex.htm, accessed July 2007

Hauffe H. (1994) 'Is Citation Analysis a Tool for Evaluation of Scientific Contributions?' Proc. 13th Winterworkshop on Biochemical and Clinical Aspects of Pteridines, St.Christoph/Arlberg, 25 February 1994, at http://www.uibk.ac.at/ub/mitarbeiter_innen/publikationen/hauffe_is_citation_analysis_a_tool.html

Hirsch J.E. (2005) 'An index to quantify an individual's scientific research output' arXiv:physics/0508025v5, 29 September 2005, at http://arxiv.org/PS_cache/physics/pdf/0508/0508025v5.pdf, accessed July 2007

Holsapple C.W., Johnson L.E., Manakyan H. & Tanner J. (1993) 'A Citation Analysis Of Business Computing Research Journals' Information & Management 25, 5 (November 1993) 231-244

Katerattanakul P. & Han B. (2003) 'Are European IS journals under-rated? an answer based on citation analysis' Euro. J. Infor. Syst. 12, 1 (March 2003) 60-71

Keen P.G.W. (1980) 'MIS Research: Reference Disciplines and a Cumulative Tradition' Proc. 1st Int'l Conf. on Information Systems, Philadelphia, PA, December 1980, pp. 9-18

Lamp J. (2005) 'The Index of Information Systems Journals', Deakin University, version of 16 August 2005, at http://lamp.infosys.deakin.edu.au/journals/index.php

Leydesdorff L. (1998) 'Theories of Citation?' Scientometrics 43, 1 (1998) 5-25, at http://users.fmg.uva.nl/lleydesdorff/citation/index.htm

MacLeod D. (2006) 'Research exercise to be scrapped' The Guardian, 22 March 2006, at http://education.guardian.co.uk/RAE/story/0,,1737082,00.html

MacRoberts M.H. & MacRoberts B.R. (1997) 'Citation content analysis of a botany journal' J. Amer. Soc. for Infor. Sci. 48 (1997) 274-275

Markus M.L. (1983) 'Power, Politics and MIS Implementation' Commun. ACM 26, 6 (June 1983) 430-444

Meho L.I. (2007) 'The Rise and Rise of Citation Analysis' Physics World (January 2007), at http://dlist.sir.arizona.edu/1703/01/PhysicsWorld.pdf, accessed July 2007

Meho L.I. & Yang K. (2007) 'A New Era in Citation and Bibliometric Analyses: Web of Science, Scopus, and Google Scholar', Forthcoming, Journal of the American Society for Information Science and Technology, at http://dlist.sir.arizona.edu/1733/, accessed July 2007

PBRF (2005) 'Performance-Based Research Fund', N.Z. Tertiary Education Commission, July 2005, at http://www.tec.govt.nz/downloads/a2z_publications/pbrf2006-guidelines.pdf

Perkel J.M. (2005) 'The Future of Citation Analysis' The Scientist 19, 20 (2005) 24

RAE (2001) 'A guide to the 2001 Research Assessment Exercise', U.K. Department for Employment and Learning, apparently undated, at http://www.hero.ac.uk/rae/Pubs/other/raeguide.pdf

RAE (2005) 'Guidance on submissions' Department for Employment and Learning, RAE 03/2005, June 2005, at http://www.rae.ac.uk/pubs/2005/03/rae0305.pdf

Saunders C. (2005) 'Bibliography of MIS Journals Citations', Association for Information Systems, undated but apparently of 2005, at http://www.isworld.org/csaunders/rankings.htm

Schlogl C. (2003) 'Mapping the intellectual structure of information management' Wirtschaftsinformatik 45, 1 (February 2003) 7-16

Vessey I., Ramesh V. & Glass R.L. (2002) 'Research in information systems: An empirical study of diversity in the discipline and its journals' J. Mngt Infor. Syst. 19, 2 (Fall 2002) 129-174

Walstrom K.A. & Leonard L.N.K. (2000) 'Citation classics from the information systems literature' Infor. & Mngt 38, 2 (December 2000) 59-72

Whitley E.A. & Galliers R.D. (2007) 'An alternative perspective on citation classics: Evidence from the first 10 years of the European Conference on Information Systems' Information & Management 44, 5 (July 2007) 441-455


Appendix A: ISI cf. Google Comparisons for Selected Researchers

This Appendix provides detailed comparisons of results extracted from both ISI and Google Scholar. Seven Australian academics were selected, from among both expatriates and local researchers. All but one were selected because of their relatively uncommon names, in order to ease the difficulties of undertaking the searches and thereby achieve reasonable quality in the data; no significance should be inferred from inclusion in or exclusion from this short list. The remaining one, this author, was selected because, for researchers with common names, full knowledge of the publications list makes it much easier to achieve a reasonable degree of accuracy with confidence.

In each of the following tables:

Exhibit A1: ISI cf. Google - Iris Vessey, January/April 2006

Google Count    Thomson Count     Venue
    145              111          Journal
     92         Unindexed (!!)    Journal (ISR)
     88               83          Journal
     86         Unindexed (!!)    Journal (CACM)
     56               26          Journal
     52         Unindexed         Conference (ICIS)
     52         Unindexed         Journal (IJMMS)
     48         Unindexed (!!)    Journal (CACM)
     41         Unindexed (!!)    Journal (IEEE Software)
     31                9          Journal
 691 or 320     229 (of 601)      Totals

Exhibit A2: ISI cf. Google - Ron Weber, January/April 2006

Google Count    Thomson Count     Venue
    125               38          Journal
    102         Unindexed         Journal (JIS)
    106               30          Journal
     87               36          Journal
     72               26          Journal (Commentary)
     65               20          Journal (Commentary)
     45         Unindexed         Book
     34         Unindexed         Journal (JIS)
     34               22          Journal
     31               24          Journal
 701 or 520     196 (of 328)      Totals

Exhibit A3: ISI cf. Google - Philip Yetton, January/April 2006

Google Count    Thomson Count     Venue
    302         Unindexed         Book
     55               11          Journal
     42               12          Journal
     32               12          Journal
     31               34          Journal (1988)
     27         Unindexed         Book
     26               57          Journal (1982)
     20               23          Book
     18                6          Journal (1985)
     18         Unindexed         Government Report
 571 or 224     155 (of 270)      Totals

Exhibit A4: ISI cf. Google - Peter Seddon, January/April 2006

Google Count    Thomson Count     Venue
    133               60          Journal
     47         Unindexed         Journal (CAIS)
     43         Unindexed         Conference (ICIS)
     33         Unindexed         Conference
     22         Unindexed (!)     Journal (DB, 2002)
     24                2          Journal (I&M, 1991)
     18                2          Journal
     18         Unindexed         Journal (JIS)
     13         Unindexed         Conference (ECIS)
      9                0          Journal (JIT, Editorial)
 360 or 184      64 (of 70)       Totals

Exhibit A5: ISI cf. Google - Paula Swatman, January/April 2006

Google Count    Thomson Count     Venue
    117               29          Journal
     73         Unindexed         Journal (Int'l Mkting Rev)
     61         Unindexed         Journal (TIS)
     43         Unindexed         Journal (JSIS)
     29         Unindexed (!)     Journal (IJEC)
     26               12          Journal
     26         Unindexed         Journal (JIS)
     24                6          Journal
     22         Unindexed         Conference
     20         Unindexed         Journal (EM)
 441 or 167      47 (of 53)       Totals

Exhibit A6: ISI cf. Google - Roger Clarke, January/April 2006

Position    Google Count    Thomson Count     Venue
   57            81               14          Journal
   59            85               16          Journal
  102            60         Unindexed         Journal (IT&P)
  148            47         Unindexed         Journal (TIS)
  253            33         Unindexed         Conference
  325            28         Unindexed         Conference
  373            25                3          Journal
  407            23         Unindexed         Journal (JSIS)
  539            18         Unindexed         Journal
  558            17         Unindexed         Conference
            417 or 191       33 (of 44)       Totals

Exhibit A7: ISI cf. Google - Guy Gable, January/April 2006

Google Count    Thomson Count     Venue
    102         Unindexed (!)     Journal (EJIS, 1994)
     56         Unindexed (!)     Journal (ISF, 2000)
     40         Unindexed         Journal (JGIM)
     27                6          Journal (MS)
     27         Unindexed         Conference
     24               23          Journal (I&M, 1991)
     23         Unindexed         Conference
     14         Unindexed         Conference
     13         Unindexed         Conference
     10         Unindexed         Conference
 336 or 51       29 (of 34)       Totals


Acknowledgements

The work reported on in this paper was conducted within the context of a major collaborative project on the IS discipline in Australia, led by Guy Gable and Bob Smyth at QUT, and reported on in a Special Issue of the Australian Journal of Information Systems 14, 1 (2006).

Dave Naumann provided assistance in relation to data from the AISWorld Faculty Directory. The paper has benefited from feedback from colleagues within and beyond the team. All researchers mentioned in the paper were invited to comment on the draft and many of their suggestions have been incorporated. Comments by Peter Seddon of the University of Melbourne and Linda Butler of the ANU were particularly valuable. Responsibility for all aspects of the work rests, of course, entirely with the author.


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in the Cyberspace Law & Policy Centre at the University of N.S.W., a Visiting Professor in the E-Commerce Programme at the University of Hong Kong, and a Visiting Professor in the Department of Computer Science at the Australian National University.


