NISO has launched new standards development projects on emerging forms of impact assessment and altmetrics. Begun in July 2013, the work of Phase 1 culminated in a white paper released in summer 2014.
Phase 2 of the project will develop standards or recommended practices in the prioritized areas of definitions, calculation methodologies, improvement of data quality, and use of persistent identifiers in alternative metrics. As part of each project, relevant use cases and their application to different stakeholder groups will be developed. The above link describes the projects approved for this phase.
With the increased interest and work surrounding the role of data (sharing, curating, mining, and archiving), bibliometrics and altmetrics are more important today than ever.
Heather Piwowar's Research Remix is an example of the growing exploration of innovative approaches in scholarly metrics.
The increasing promotion of these issues at conferences and in the literature illustrates how closely higher education now tracks research outputs of every kind.
Springer announced that, effective March 2014, it has added altmetrics information to every article available on SpringerLink. The data is provided by Altmetric, and this information is visible to all visitors to SpringerLink, not just those with access to full-text articles. One can see where an article is being shared or discussed across the web. The number of shares for any given article is now listed alongside citations and links users to Altmetric.
How scholarly metrics will evolve is the central question, and that evolution will likely be influenced by the changing world of publishing, new forms of creative sharing, directions in the Open Access movement, and the development of standards to facilitate easy comparison and analysis.
Libraries and authors have been trying to determine the validity and accuracy of usage statistics.
Usage data originally measured shelving statistics for print volumes; in the digital environment, it can measure the usage of individual component parts (i.e., a specific article within a journal title, a chapter within a book, etc.).
Newer products focus on metrics at the article level and are being explored by different producers.
Vendor-generated data addresses the bigger picture, as with COUNTER Online Metrics, which also provides UsageFactor and Publisher and Institutional Repository Usage Statistics (PIRUS2); the embedding of PageRank into Google's metrics; and the Y-Factor, a blend of the impact factor and PageRank.
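To make the blend of impact factor and PageRank concrete, here is a minimal sketch: a power-iteration PageRank over a tiny journal citation graph, multiplied by an impact factor to yield a Y-Factor-style score. The journal names, citation links, and impact factor values are all invented for illustration; this is not any vendor's actual calculation.

```python
def pagerank(graph, damping=0.85, iterations=50):
    """graph: {node: [nodes it cites]}; returns {node: score} summing to 1."""
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, cited in graph.items():
            if cited:
                share = damping * rank[node] / len(cited)
                for target in cited:
                    new_rank[target] += share
            else:  # dangling node: spread its weight evenly
                for target in nodes:
                    new_rank[target] += damping * rank[node] / n
        rank = new_rank
    return rank

# Invented citation graph: journal -> journals it cites.
citations = {
    "J. Alpha": ["J. Beta", "J. Gamma"],
    "J. Beta": ["J. Alpha"],
    "J. Gamma": ["J. Alpha", "J. Beta"],
}
impact_factor = {"J. Alpha": 3.2, "J. Beta": 1.1, "J. Gamma": 0.8}  # invented

pr = pagerank(citations)
# Y-Factor-style score: product of impact factor and PageRank weight.
y_factor = {j: impact_factor[j] * pr[j] for j in citations}
for j in sorted(y_factor, key=y_factor.get, reverse=True):
    print(f"{j}: PageRank={pr[j]:.3f}, Y={y_factor[j]:.3f}")
```

The design point the Y-Factor captures is that citation counts alone treat all citing journals equally, while PageRank weights a citation by the standing of the journal it comes from; multiplying the two rewards journals that are both heavily and influentially cited.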
Co-citation tools offer visualizations of clusters of authors, revealing relationships and how their scholarship influences one another. The information science literature explains the significance of these networks, and some software tools illustrate the formation of these clusters.
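The clusters these tools visualize rest on a simple count: two works are co-cited whenever they appear together in a third work's reference list. A minimal sketch of that counting step, assuming each citing paper's references are available as a set of cited-work IDs (the IDs and lists below are invented):

```python
from collections import Counter
from itertools import combinations

# Invented data: citing paper -> set of works it cites.
reference_lists = {
    "paper1": {"Garfield1955", "Small1973", "Price1965"},
    "paper2": {"Small1973", "Price1965"},
    "paper3": {"Garfield1955", "Small1973"},
}

cocitations = Counter()
for refs in reference_lists.values():
    # Count every unordered pair of works cited together; sorting the
    # IDs makes (a, b) and (b, a) the same key.
    for a, b in combinations(sorted(refs), 2):
        cocitations[(a, b)] += 1

# The most frequently co-cited pairs seed the visualized clusters.
for pair, count in cocitations.most_common():
    print(pair, count)
```

Visualization tools then treat these counts as edge weights in a network and apply clustering to group works (or their authors) that are repeatedly cited together.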
One of the best tables that compares many of the current traditional and alternative metrics is part of a LibGuide created at the University of Utrecht and is highly recommended.
One of the problems in measuring author impact is identifying all the author's work. Variant names and spellings can lead to missed or duplicate citations.
Efforts have been made to clearly identify authors, most notably in the attempt to establish unique author identifiers.
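The variant-name problem described above can be illustrated with one naive grouping heuristic: reduce every name to a (surname, first initial) key. This is only a sketch of why disambiguation is hard, not a real method; the function and sample names are invented, and it is precisely the failures of heuristics like this that motivate unique author identifiers such as ORCID.

```python
import re

def name_key(name):
    """Reduce 'Smith, John A.' or 'J. A. Smith' to ('smith', 'j')."""
    if "," in name:
        # 'Surname, Given' form.
        surname, rest = [part.strip() for part in name.split(",", 1)]
    else:
        # 'Given Surname' form: last token is the surname.
        parts = name.split()
        surname, rest = parts[-1], " ".join(parts[:-1])
    # First alphabetic character of the given name(s), lowercased.
    initial = re.sub(r"[^a-z]", "", rest.lower())[:1]
    return (surname.lower(), initial)

variants = ["Smith, John A.", "J. A. Smith", "John Smith", "Jones, T."]
# The three Smith variants collapse to one key; 'Jones, T.' stays distinct.
for v in variants:
    print(v, "->", name_key(v))
```

Note the heuristic's weakness in both directions: it merges distinct authors who share a surname and initial, and it still splits one author whose surname is recorded inconsistently, which is why identifier-based approaches have won out.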
Many libraries are trying to draw readers to their institutional repositories, and content deposited there is usually crawled by search engines. In addition to organizing content and elements retrieved from traditional and secondary sources, researchers find it useful to organize and customize their retrieved references. New products that aid this effort are being created, tested, and released.