A set of metrics based on the social web that are used to track, measure and analyse scholarly outputs and publications.
A reference or measurement standard for comparison of performance. The performance level is recognised as the standard of excellence. "Target" can serve as a synonym for "Benchmark", meaning the value of an indicator to be achieved by a specified time.
To systematically measure and compare, using a process of comparing the research, individual, or entity of focus to an appropriate comparator: a similar entity, research area, career stage, etc. Publications are typically benchmarked using Web of Science, InCites, Scopus, SciVal, Publish or Perish, Dimensions, etc.
A set of quantitative methods used to measure, track and analyse scholarly literature by using mathematical and statistical methods. The term was coined in the late 1960s, but the methods were already in use in the 1940s.
A reference to the work of another author making clear the influence of the previously authored work on the most recent/current publication. The reference helps locate the original work in the corpus of literature.
Examination of a dataset of citations to establish patterns of citation. Best-evidence sampling methods should be used. Citation analysis is found in much of the research published within the bibliometrics field.
The number of citations accrued since publication. As with all citation-based measures, it is important to be aware of citation practices. "Effective strategies for increasing citation frequency" lists 33 different ways to increase citations.
The 99th percentile is high, and indicates a document in the top 1% globally.
Can be created in several formats or structures, or using a variety of underlying citation, bibliometric or benchmarking data. Individuals requesting or using citation reports need to further evaluate, refine and integrate the data found in them, creating a new fit-for-purpose report that includes other elements of evidence.
CiteScore measures the citation impact of titles in Scopus. The improved CiteScore methodology counts the citations received over four years (e.g., 2017–2020) to articles, reviews, conference papers, book chapters and data papers published in those same four years, and divides this by the number of publications published in the same period.
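The four-year CiteScore calculation described above can be sketched as a single ratio; the counts below are hypothetical, not figures for any real journal.

```python
# Hypothetical counts for a journal over the 2017-2020 CiteScore window.
citations_2017_2020 = 12_400    # citations received 2017-2020 to eligible items published 2017-2020
publications_2017_2020 = 3_100  # eligible items (articles, reviews, conference papers, etc.) published 2017-2020

# CiteScore = citations in the window / publications in the same window.
citescore = citations_2017_2020 / publications_2017_2020
print(round(citescore, 1))  # 4.0
```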
Category Normalised Citation Impact shows how a paper or group of papers performs relative to the averages or baselines for its category. This metric allows fair comparison of institutions of different sizes, or of multidisciplinary document sets. It is a metric based on Clarivate data. See the InCites Indicators Handbook.
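A minimal sketch of the normalisation idea behind CNCI, assuming hypothetical papers and baselines: each paper's citations are divided by the expected (baseline) citations for its category, year and document type, and the results are averaged. The exact baselines come from Clarivate data; the numbers here are illustrative only.

```python
# Hypothetical papers, each with its actual citations and the Clarivate-style
# baseline (expected citations) for its category, year and document type.
papers = [
    {"citations": 12, "baseline": 6.0},
    {"citations": 3,  "baseline": 6.0},
    {"citations": 9,  "baseline": 4.5},
]

# CNCI for the set: mean of the per-paper citation/baseline ratios.
# A value of 1.0 means performance at the category average.
cnci = sum(p["citations"] / p["baseline"] for p in papers) / len(papers)
print(round(cnci, 2))  # 1.5
```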
The number of items published by an individual or group of individuals. A researcher using document count should also provide a list of document titles with links. If authors use an ORCID, they can draw on numerous sources for document count, including Scopus, ResearcherID, CrossRef and PubMed.
Field-Weighted Citation Impact is the ratio of the total citations actually received by an output (or group of outputs) to the total citations that would be expected based on the average for the subject field. E.g., an output or group of outputs with a FWCI of 2.11 has performed 111% above the expected world average for outputs of similar type, age and field. It is a metric based on Elsevier data. See the Research Metrics Guidebook.
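The FWCI ratio can be sketched in one line; the counts below are hypothetical, and in practice the expected value is supplied by Elsevier's field baselines rather than computed locally.

```python
# Hypothetical counts for a single output.
actual_citations = 19     # citations the output actually received
expected_citations = 9.0  # world average for outputs of the same type, age and field

# FWCI = actual / expected; 1.00 means exactly the world average.
fwci = actual_citations / expected_citations
print(round(fwci, 2))  # 2.11, i.e. 111% above the expected average
```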
Incorporates both productivity (number of publications) and citation impact (number of citations to each publication) into one author metric, the h-index. (It can also be calculated for journals.) The h-index can be calculated manually by listing the author's articles next to their citation counts in decreasing order: h is the largest number such that h articles have each received at least h citations. If an author has written 100 papers, each of which has 100 citations, the author has an h-index of 100. Accurate determination of h requires a complete bibliography, and its value is significantly affected by length of career and field. When used for individuals, it tends to correlate closely with the age of the researcher and the length of career, including career breaks, and it can be heavily influenced by historic publishing patterns. It is also heavily field-dependent, as some fields tend to produce more papers and citations than others. Unless the data is absolutely accurate, it may be safest to avoid using it.
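The manual procedure above (sort citation counts in decreasing order, find the largest rank h where the count still meets or exceeds the rank) translates directly into a short function; the citation lists below are made-up examples.

```python
def h_index(citation_counts):
    """Return the h-index: the largest h such that h papers
    have each been cited at least h times."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# 100 papers with 100 citations each -> h-index of 100, as in the definition.
print(h_index([100] * 100))       # 100
# Hypothetical small record: 4 papers with at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))  # 4
```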
The h5-index, a metric based on Elsevier data, applies a 5-year publication and citation window to the standard h-index calculation and can be used to track the metric fairly over time in the SciVal (Elsevier) Benchmarking module. The h5-index in the SciVal Overview module for an entity always uses publication and citation information from the last 5 complete years. For example, in January 2020 the h5-index uses the date range 2014–2018, as 2019 is not considered a complete year until around June 2020.
A Google Scholar Citations metric based on the number of articles published by an author that have been cited at least ten times each. It generally evens out situations in which authors might otherwise be disadvantaged, such as the early career period.
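The i10-index definition above reduces to counting papers with ten or more citations; the citation list below is a hypothetical record, not real data.

```python
def i10_index(citation_counts):
    """Google Scholar's i10-index: the number of articles
    cited at least ten times each."""
    return sum(1 for cites in citation_counts if cites >= 10)

# Hypothetical record: three papers reach the ten-citation threshold.
print(i10_index([25, 12, 10, 9, 3]))  # 3
```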
An influence of one or more authors on another, on a research field, on society, on the economy, etc. - that can be tracked or measured.
Used in the field of statistics as a synonym for variable. Indicators are the measurable states which allow the assessment of whether, or not, the associated criteria are being met.
A standard of measurement.
Outputs in top %
Outputs in top percentiles demonstrates the extent to which the documents of a research entity (author, group, or institution) are present in the most-cited percentiles of a data universe. Found within SciVal, outputs in top percentiles can be field-weighted. It indicates how many articles are in the top 1%, 5%, 10%, or 25% of the most cited documents.
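A simplified sketch of the counting idea, with made-up citation counts: find the citation threshold that marks the top 10% of the data universe, then count how many of the entity's outputs meet it. Real implementations (e.g., SciVal's) use much larger universes and their own percentile and field-weighting rules, so this is illustrative only.

```python
# Hypothetical data universe of citation counts, sorted most-cited first.
universe = sorted([0, 1, 1, 2, 3, 5, 8, 13, 21, 34], reverse=True)

# Citation threshold for the top 10% (at least one document qualifies).
top_n = max(1, round(len(universe) * 0.10))
threshold = universe[top_n - 1]  # minimum citations needed for the top 10%

# Hypothetical entity outputs: count those at or above the threshold.
entity_outputs = [40, 21, 4, 0]
in_top_10_percent = sum(1 for cites in entity_outputs if cites >= threshold)
print(threshold, in_top_10_percent)  # 34 1
```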
Other databases call this citation count. Indicates the number of citations accrued since publication. As with all citation-based measures, it is important to be aware of citation practices. "Effective strategies for increasing citation frequency" lists 33 different ways to increase citations.