Reflections on the 1:AM altmetrics conference

Sunday, October 12th, 2014

I recently attended 1:AM, the first altmetrics conference, and I am still considering what I learnt from the various perspectives presented by publishers, funders, policy makers, librarians, and researchers. One impression I came away with strongly is that the use of altmetrics as a proxy indicator of research impact is neither straightforward nor widely accepted, but that's not to suggest that anyone thought it would be. Euan Adie, founder of Altmetric.com, one of the main companies exploring altmetrics and how they relate to research impact, summed it up thus:

‘impact’ means different things to a publisher than to a funder, and the end goals for altmetrics in general vary from user to user

For me, the impact of research is as much about reach as it is about the influence or change it brings about. Traditionally, we researchers tended to think of other researchers as the only target of our reach, and of course the best way to measure that was citations. But as funders that rely primarily on taxpayers' money increasingly ask for evidence of the "demonstrable contribution that excellent research makes to society and the economy" through their pathways to impact, reaching only an academic audience is no longer sufficient. This has been reinforced by the inclusion of impact case studies in the Research Excellence Framework 2014. This isn't a bad thing, and as a taxpayer myself I'd quite like to know how my money is being spent. The challenge, of course, in terms of research impact is how to measure it.

The altmetrics manifesto, written back in 2010, makes three bold assertions:

  • Peer review is unaccountable
  • Citation metrics are too narrow and ignore context
  • Journal impact factors can be easily gamed and incorrectly measure the impact of individual articles

In order to counter the slow, unaccountable, misleading, and, some might say, broken metrics surrounding research, new metrics are required. Altmetrics respond to the sharing of "raw science" such as datasets, code, and experimental designs; to "nanopublication"; and to self-publishing via blogging, microblogging, and comments or annotations on existing work. Altmetrics "expand our view of what impact looks like, but also of what's making the impact."

The response of the emerging altmetrics services to date has been to quantify some of these signals, and the now familiar altmetric donut gives us a reassuring score, where presumably the bigger the number the better, and the better the impact. Or does it? A view put forward by many at the 1:AM conference is that, useful as some of these approaches may be, a crude number is little better than what's on offer from conventional metrics. Surely it's the context that matters. But how do you measure context with a number, and what do the numbers mean anyway? Is Twitter any less vulnerable to gaming than journal impact factors? We were repeatedly told at the conference that altmetrics are so much more than social media mentions, yet more often than not the discussion came down to mentions on Twitter. We still have a long way to go, I think, and the jury is still out on the evidence that altmetrics are useful. We shall probably have to wait until early 2015, when HEFCE publishes its independent review of the role of metrics in research assessment, for an official view.
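
For the curious, here is a rough sketch of how you might peek behind one of those donuts yourself. It assumes the public Altmetric details endpoint (api.altmetric.com/v1/doi/<doi>) and the JSON field names shown in the comments, so do check the current API documentation before relying on any of it; the DOI used is just a placeholder, not one of my papers.

    # Rough sketch: pull the Altmetric "donut" data for one DOI and look past
    # the single score at where the mentions actually come from. The endpoint
    # and the field names below are assumptions based on the public Altmetric
    # API; check the current documentation before relying on them.
    import json
    import urllib.error
    import urllib.request

    doi = "10.1038/nature12373"  # placeholder DOI, not one of my own papers
    url = f"https://api.altmetric.com/v1/doi/{doi}"

    try:
        with urllib.request.urlopen(url) as response:
            record = json.load(response)
    except urllib.error.HTTPError:
        record = None  # the API answers 404 for a DOI it has never seen

    if record:
        print("Altmetric score:", record.get("score"))
        # The per-source breakdown is arguably more informative than the
        # headline number, because it hints at who is doing the mentioning.
        for field in ("cited_by_tweeters_count",
                      "cited_by_fbwalls_count",
                      "cited_by_feeds_count",
                      "cited_by_accounts_count"):
            print(field, record.get(field, 0))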

So in the meantime, what is the researcher to make of all this? Here is my own short and incomplete list of observations from the 1:AM conference:

  1. Research articles that are well cited often, but not always, have a positive altmetric number.
  2. Research articles that are media friendly, most trivially those with quirky or scatological titles, have great altmetric scores, but not necessarily many academic citations.
  3. The points above only apply to research published in the last 3-4 years. Altmetric numbers don't tend to be available for research published more than a few years ago.
  4. Currently, altmetric numbers don't tell us much, if anything, about context.
  5. It is unclear whether actively engaging with social media will increase the impact of a given piece of research.
  6. Nobody yet knows what research impact, as measured by altmetrics, means.
  7. There’s probably something important about altmetrics, but it’s not yet clear what it is.

To address these rhetorical questions, I refer you, gentle reader, back to the altmetrics manifesto:

Researchers must ask if altmetrics really reflect impact, or just empty buzz. Work should correlate between altmetrics and existing measures, predict citations from altmetrics, and compare altmetrics with expert evaluation.

For now though, it's the word of caution offered by Jeremy Farrar, Director of the Wellcome Trust, who opened the 1:AM conference, that struck me most, and it will be the main message I take back to our research strategy group. While Farrar has a vision for the Wellcome Trust playing a role in the emerging altmetrics field, he warned the conference not to further burden an already overburdened research community with yet another approach to assessing impact that might destroy the very creativity and innovation it sets out to measure. I couldn't agree more. Now, 'like' if you agree too.




Institutional VLEs, why bother?

Wednesday, September 17th, 2014

As part of a current debate on the role of the LMS and the VLE in an agenda of openness, Amber suggests that VLEs can be many things but they are not fundamentally evil:

“VLEs can be used as a platform for fantastic blended and online learning, but even if they are not used to that extent, they are still important.”

The comment I left in response was based on the consideration that, while universities are in the business of education, where students pay a considerable fee to attend a course there is inevitably going to be some differentiation between what they receive and what someone who doesn't pay a fee receives. This is actively being played out in many institutions as part of an exploration of pedagogies and platforms for open courses, especially MOOCs, versus fee-based accredited courses. Usually these are different: platforms tend to be more social, to support large communities of dispersed learners in a MOOC, and pedagogies tend to favour tutor-based support for fee-based accredited courses compared with peer support in massive open courses.

In exchange for the fee that students pay to attend courses at university, currently £9,000 a year in England, they might reasonably expect a consistent standard of experience across the modules in their course. I think institutional VLEs should play an important role in that by providing a minimum standard of content, support, and activities that students can expect from every module. For some teachers, however, that in itself can be a challenge to their practice, given the competing priorities forced upon most academics. Furthermore, not every teacher is an innovator (should they be?), so it's inevitable that different teachers are going to provide a different experience, some better than others. Nonetheless, minimum standards should be a goal the institution expects for and on behalf of students. The VLE can certainly help with consistency through templates. But a minimum standard is just that, a minimum. The maximum need not be described or prescribed. I've yet to see a VLE that stops a teacher from being innovative should they wish to be.




Conference Twitter overwhelm, is it just me?

Sunday, September 14th, 2014

Image: Social Media Information Overload, by Mark Smiciklas, on Flickr

I was at a conference recently that was actively promoting the use of social media, including Twitter. Most conferences do these days, it seems. It was a good opportunity to share thoughts and experiences with other participants, and to engage with an audience not attending the conference itself, for example by tweeting using the conference hashtag. Indeed, there were folk back home who appeared to be tracking what they were missing by following the conference session tweets, and in some cases there seemed to be meaningful interaction between conference participants and those listening in. That broadens what it means to be a conference participant these days: you no longer need to be present to join in with the delegates.

I have a confession, however. I felt totally overwhelmed by the volume of information that was flowing through my social media channels, Twitter in particular. It was partly my fault for keeping my devices, an iPad and iPhone as it happens, open throughout the sessions rather than just listening to what was being presented. But it was also because I totally failed to find any kind of balance between what was going on at the podium and what was going on online. The volume of material being posted was impossible to keep up with, so in the end I didn't even try. That created another problem for me: digital eavesdropping. By not being able to follow everything that was posted, I ended up feeling like an outsider at someone else's party. I was that person on the periphery of a circle of friends clearly having a good time, but not actually contributing, apart from the occasional comment or interjection that invariably gets ignored.

I enjoyed the conference but left feeling that I had actually missed a vital part of it, as others were saying how useful the online engagement was. How did they manage to participate in person and online in any meaningful way? Was I, am I, missing some important new skill for the new extended conference experience, and should I be worried? It's the last question that has been troubling me most, and it regresses me to my late teenage years, when I felt a mild form of social anxiety at potentially missing all the best parties. I'm sure that probably tells you more about me than it does about the use of social media at conferences. But I do wonder.

Anyway, I'm still wondering how people manage to integrate the tsunami of tweets (I actually referred to a "tsunami of twits" in my only conference Twitter contribution). What are your personal strategies for using social media at conferences?




Happy birthday Opportunity

Saturday, January 25th, 2014

Image: Opportunity rover selfie. Credit: NASA/JPL-Caltech/Cornell Univ./Arizona State Univ.

Ten years ago today I wrote a short blog piece to note the landing of the NASA Opportunity rover on the surface of Mars. With the Spirit rover already on Mars, I wrote "this is going to be an exciting next few weeks". The mission was planned to last around three months. Well, a decade later, Opportunity has outlived Spirit by around 30 months and is still working and generating useful data. During its time on Mars, Opportunity has driven 39 km and taken 187,000 images, including this selfie a few days ago. So sit back and watch some of the highlights of this incredible engineering and science project.




Proliferation of researcher profiles

Saturday, January 4th, 2014

As a research-active academic I publish papers and engage in other research activities that hopefully have some impact. Just what that impact is and how to measure it will be the subject of a later post (pro tip: it's altmetrics).

The first challenge however is to assemble a list of all my research outputs. Straightforward you say? Well perhaps, but precisely what is classed as a research output depends somewhat on the field you are in. For many of us the journal article is the most obvious output and therefore compiling a list of journal articles I’ve published has been my focus recently.

Actually, I started thinking about this six years ago when I wrote about publicationlist.org. That was, and remains, a great site, simple to use and neat-looking, but it requires some effort to gather together all your papers. A related problem is just who am I, at least in the research literature? I have appeared in print variously as 'Davies D', 'Davies DA', 'Davies David', 'David Davies', 'D A Davies', 'D Davies', and probably other combinations involving the different institutions I've worked at. They're all me of course, but to a database they're different people unless they can all be associated with a unique ID, the unique me.

Enter the Open Researcher and Contributor ID (ORCID), Scopus Author ID, and ResearcherID, three initiatives aiming to uniquely identify each researcher behind their fragmented publication profiles. Scopus Author ID and ResearcherID are backed by two of the biggest academic publishers, Elsevier and Thomson Reuters respectively, but ORCID is especially interesting as it's an open, non-profit, community-based effort. Thankfully all three systems talk to each other, so you can link your Scopus Author ID and ResearcherID to your ORCID. And that's what I've been doing over the holiday. I think I have now assembled a definitive list of my published outputs.
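
If, like me, you want to sanity-check what has actually ended up attached to an ORCID record, here is a rough sketch using ORCID's public API. The endpoint and the JSON structure navigated in the code are my assumptions about the public v3.0 API rather than gospel, and the iD used is ORCID's own published example record, not mine.

    # Rough sketch: list the works attached to one ORCID record via the
    # public ORCID API. The endpoint and the JSON structure navigated below
    # are assumptions about the public v3.0 API; the iD is ORCID's published
    # example record, not mine.
    import json
    import urllib.request

    orcid_id = "0000-0002-1825-0097"  # ORCID's own example iD

    request = urllib.request.Request(
        f"https://pub.orcid.org/v3.0/{orcid_id}/works",
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        works = json.load(response)

    # Each "group" bundles records of the same output pulled from different
    # sources, for example one from Scopus and one from ResearcherID.
    for group in works.get("group", []):
        summary = group["work-summary"][0]
        title = summary["title"]["title"]["value"]
        print(summary.get("type"), "-", title)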

There are differences between the three schemes. ORCID is the simplest and just presents a list of outputs plus the associated publication metadata; useful for establishing my researcher profile on the web, but limited in functionality. ResearcherID is the most comprehensive because it uses Thomson Reuters' Web of Knowledge and Web of Science to find not only peer-reviewed journal articles but also conference proceedings, published poster abstracts, and other works. That makes it more attractive for early career researchers who have presented publishable work at conferences but have yet to build up an extensive journal profile. ResearcherID also has some high-level citation metrics. Scopus, however, is likely to be the profile your institution is most interested in because it includes detailed citation metrics and analytics. It is also very useful for finding out who cites your work, so you get a good idea of the active researchers in your field, as well as one measure of the impact of your work.

There are other differences that will become apparent when trying to gauge the impact of your research, especially when considering other factors such as who is talking about your work via social media. In that respect ORCID seems to be the preferred unique ID, probably because it’s an open non-profit initiative. It also plays well with the small but increasing number of altmetrics sites such as ImpactStory, but more about that if/when I write about altmetrics. But for now you might want to consider creating and maintaining all three profiles.

So anyway, if you want to check out my own research outputs then my researcher profiles are:

My ORCID
My ResearcherID
My Scopus Author ID

I’ve also just started using ImpactStory so if you want to see what impact I’m apparently having then head over to my ImpactStory profile.

But wait, that’s not all. There are some other interesting researcher profile services around. These are less about establishing a unique researcher ID, but instead are extremely useful for building a researcher profile on the web and creating a professional social network around researchers. The service that most of my colleagues seem to be taking up is ResearchGate. It’s very easy to use and looks slick. Unfortunately it doesn’t yet use any of the researcher IDs so there’s still a relatively long-winded method for finding all your papers, unless you import them as BibTex, EndNote or other equivalent format from ResearcherID for example.

Here’s my ResearchGate profile. Using the social networking features you can ‘follow’ me in a similar way to following people on Twitter.

If there are other research profile schemes or researcher networks that you find useful please mention them in the comments section below.
