Reflections on the 1:AM altmetrics conference

Sunday, October 12th, 2014

I recently attended 1:AM, the first altmetrics conference, and I am still considering what I learnt from the various perspectives presented by publishers, funders, policy makers, librarians, and researchers. One impression I came away with strongly is that the use of altmetrics as a proxy indicator of research impact is neither straightforward nor universally accepted, but that’s not to suggest that anyone thought it would be. Euan Adie, founder of Altmetric.com, one of the main companies exploring altmetrics and how they relate to research impact, summed it up thus:

‘impact’ means different things to a publisher than to a funder, and the end goals for altmetrics in general vary from user to user

For me, the impact of research is as much about reach as it is about the influence or change it brings about. Traditionally we researchers tended to think only of other researchers as the target of our reach, and of course the best way to measure that was citations. But as funders that rely primarily on taxpayers’ money increasingly ask for evidence of the “demonstrable contribution that excellent research makes to society and the economy” through their pathways to impact, reaching an academic audience alone is no longer sufficient. This has been reinforced by the inclusion of impact case studies in the Research Excellence Framework 2014. This isn’t a bad thing, and as a taxpayer myself I’d quite like to know how my money is being spent. The challenge, of course, is how to measure that impact.

The altmetrics manifesto, written back in 2010, makes three bold assertions:

  • Peer review is unaccountable
  • Citation metrics are too narrow and ignore context
  • Journal impact factors can be easily gamed and incorrectly measure the impact of individual articles

To counter the slow, unaccountable, misleading, and, some might say, broken metrics surrounding research, new metrics are required. Altmetrics respond to the sharing of “raw science” such as datasets, code, and experimental designs, to “nanopublication”, and to self-publishing via blogging, microblogging, and comments or annotations on existing work. Altmetrics “expand our view of what impact looks like, but also of what’s making the impact.”

The response of the emerging altmetrics services to date has been to quantify some of these signals, and the now familiar altmetric donut gives us a reassuring score, where presumably the bigger the number the better, and the better the impact. Or does it? A view put forward by many at the 1:AM conference is that, useful as some of these approaches may be, a crude number is little better than what’s on offer from conventional metrics. Surely it’s the context that matters. But how do you measure context with a number, and what do the numbers mean anyway? Is Twitter any less vulnerable to gaming than journal impact factors? We were repeatedly told at the conference that altmetrics are so much more than social media mentions, yet more often than not the discussion came down to mentions on Twitter. We still have a long way to go, I think, and the jury is still out on the evidence that altmetrics are useful. We shall probably have to wait until early 2015, when HEFCE publishes its independent review of the role of metrics in research assessment, for an official view.

So in the meantime what is the researcher to make of all this? Here is my own short and incomplete list of observations I made attending the 1:AM conference:

  1. Research articles that are well cited often, but not always, have a positive altmetric score.
  2. Research articles that are media friendly, most trivially those with quirky or scatological titles, have great altmetric scores but not necessarily many academic citations.
  3. The points above only apply to research published in the last 3-4 years. Altmetric numbers don’t tend to be available for research published more than a few years ago.
  4. Currently, altmetric numbers don’t tell us much, if anything, about context.
  5. It is unclear whether actively engaging with social media will increase the impact of a given piece of research.
  6. Nobody yet knows what research impact, as measured by altmetrics, means.
  7. There’s probably something important about altmetrics, but it’s not yet clear what it is.

To address these open questions, I refer you, gentle reader, back to the altmetrics manifesto:

Researchers must ask if altmetrics really reflect impact, or just empty buzz. Work should correlate between altmetrics and existing measures, predict citations from altmetrics, and compare altmetrics with expert evaluation.
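As a concrete illustration of the kind of validation work the manifesto calls for, here is a minimal sketch of correlating altmetric scores with citation counts. The DOIs and numbers are entirely invented for illustration; real work would draw on large, field-matched samples pulled from the relevant APIs.

```python
# A minimal sketch of the manifesto's first suggestion: correlate altmetric
# scores with an existing measure (citation counts). All values below are
# invented for illustration only.
from scipy.stats import spearmanr

articles = {
    # hypothetical DOI: (altmetric score, citation count)
    "10.1000/example.001": (45, 40),
    "10.1000/example.002": (3, 30),
    "10.1000/example.003": (120, 8),
    "10.1000/example.004": (15, 20),
    "10.1000/example.005": (0, 2),
}

scores = [score for score, _ in articles.values()]
citations = [cites for _, cites in articles.values()]

# Spearman's rank correlation is a reasonable starting point because neither
# measure is likely to be normally distributed.
rho, p_value = spearmanr(scores, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
```

A consistently positive and significant correlation across large samples would be the beginning of an answer; five invented articles, obviously, are not.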

For now though, it’s the word of caution offered by Jeremy Farrar, Director of the Wellcome Trust, who opened the 1:AM conference, that struck me most, and it will be the main message I take back to our research strategy group. While Farrar has a vision for the Wellcome Trust playing a role in the emerging altmetrics field, he warned the conference not to further burden an already overburdened research community with yet another approach to assessing impact, one that might destroy the very creativity and innovation it sets out to measure. I couldn’t agree more. Now, ‘like’ if you agree too.




Institutional VLEs, why bother?

Wednesday, September 17th, 2014

As part of a current debate on the role of the LMS and the VLE in an agenda of openness, Amber suggests that VLEs can be many things but they are not fundamentally evil:

“VLEs can be used as a platform for fantastic blended and online learning, but even if they are not used to that extent, they are still important.”

The comment I left in response was based on the consideration that, while universities are in the business of education and students pay a considerable fee to attend a course, there is inevitably going to be a differentiation between what they receive and what someone who doesn’t pay a fee receives. This is actively being played out in many institutions as part of an exploration of pedagogy and platforms for open courses, especially MOOCs, versus fee-based accredited courses. Usually these are different: platforms tend to be more social, to support the large communities of dispersed learners in a MOOC, and pedagogies tend to favour tutor-based support for fee-based accredited courses compared with peer support in massive open courses.

In exchange for the fee that students pay to attend courses at university, currently £9,000 a year in England, they might reasonably expect a consistent standard of experience across the modules in their course. I think institutional VLEs should play an important role in that by providing a minimum standard of content, support, and activities that students can expect from every module. For some teachers, however, that in itself can be a challenge to their practice, given the competing priorities forced upon most academics. Furthermore, not every teacher is an innovator – should they be? – so it’s inevitable that different teachers are going to provide a different experience, some better than others. Nonetheless, minimum standards should be a goal expected by the institution for and on behalf of students. The VLE can certainly help with consistency through templates. But a minimum standard is just that: a minimum. The maximum need not be described or prescribed. I’ve yet to see a VLE that stops a teacher from being innovative should they wish to be.




Happy birthday Opportunity

Saturday, January 25th, 2014


Opportunity rover selfie. Image Credit: NASA/JPL-Caltech/Cornell Univ./Arizona State Univ.

Ten years ago today I wrote a short blog piece to note the landing of the NASA Opportunity rover on the surface of Mars. With the Spirit rover already on Mars, I wrote “this is going to be an exciting next few weeks”. The mission was planned to last around three months. Well, a decade later, Opportunity has outlived Spirit by around 30 months and is still working and generating useful data. During its time on Mars Opportunity has driven 39 km and taken 187,000 images, including the selfie above, taken a few days ago. So sit back and watch some of the highlights of this incredible engineering and science project.




Proliferation of researcher profiles

Saturday, January 4th, 2014

As a research-active academic I publish papers and engage in other research activities that hopefully have some impact. Just what that impact is and how to measure it will be the subject of a later post – protip: it’s altmetrics.

The first challenge, however, is to assemble a list of all my research outputs. Straightforward, you say? Well perhaps, but precisely what is classed as a research output depends somewhat on the field you are in. For many of us the journal article is the most obvious output, and therefore compiling a list of the journal articles I’ve published has been my focus recently.

Actually I started thinking about this six years ago when I wrote about publicationlist.org. That was, and remains, a great site, simple to use and neat looking, but it requires some effort to gather together all your papers. A related problem is working out just who I am, at least in the research literature. I have appeared in print variously named as ‘Davies D’, ‘Davies DA’, ‘Davies David’, ‘David Davies’, ‘D A Davies’, ‘D Davies’, and probably other combinations involving the different institutions I’ve worked at. They’re all me of course, but to a database they’re different people unless they can all be associated with a unique ID, the unique me.
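To make that concrete, here is a toy sketch of what a unique identifier buys you: every one of those byline variants resolves to a single record. The ORCID iD shown is a made-up placeholder, not a real identifier.

```python
# A toy illustration of why a unique researcher ID matters: all of these
# byline strings are the same person, but a database only knows that if each
# variant resolves to the same identifier. The iD below is a placeholder.
NAME_VARIANTS = {
    "Davies D",
    "Davies DA",
    "Davies David",
    "David Davies",
    "D A Davies",
    "D Davies",
}

PLACEHOLDER_ORCID = "0000-0000-0000-0000"  # hypothetical, not a real iD

# In a real system this index would be built from author-identifier links,
# not hand-written.
author_index = {variant: PLACEHOLDER_ORCID for variant in NAME_VARIANTS}

def same_author(name_a: str, name_b: str) -> bool:
    """Two byline strings refer to the same researcher only if both
    resolve to the same (known) unique identifier."""
    id_a, id_b = author_index.get(name_a), author_index.get(name_b)
    return id_a is not None and id_a == id_b

print(same_author("Davies D", "David Davies"))  # True once both are linked
```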

Enter Open Researcher and Contributor ID (ORCID), Scopus Author ID, and ResearcherID, three initiatives aiming to uniquely identify each researcher from their fragmented publication profiles. Scopus Author ID and ResearcherID are backed by two of the biggest academic publishers, Elsevier and Thomson Reuters respectively, but ORCID is especially interesting as it’s an open, non-profit, community-based effort. Thankfully all three systems talk to each other, so you can link your Scopus Author ID and ResearcherID to your ORCID. And that’s what I’ve been doing over the holiday. I think I have now assembled a definitive list of my published outputs.

There are differences between the three schemes. ORCID is the simplest: it just presents a list of outputs plus the associated publication metadata. That is useful for establishing my researcher profile on the web, but limited in functionality. ResearcherID is the most comprehensive because it uses Thomson Reuters’ Web of Knowledge and Web of Science to find not only peer-reviewed journal articles but also conference proceedings, published poster abstracts, and other works. It is certainly more attractive for early career researchers who have presented publishable work at conferences but have yet to build up an extensive journal profile. ResearcherID also has some high-level citation metrics. Scopus, however, is likely to be the profile your institution is most interested in because it includes detailed citation metrics and analytics. It is also very useful for finding out who cites your work, so you get a good idea of the active researchers in your field as well as one measure of the impact of your work.
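For anyone who wants that ORCID list of outputs programmatically rather than through the web profile, here is a minimal sketch against ORCID’s public API. The API version, endpoint path, and JSON field names are my assumptions and may differ from what ORCID actually serves; the iD is a placeholder.

```python
# A minimal sketch of pulling a researcher's works from the ORCID public API.
# The API version, endpoint, and JSON field names are assumptions and may
# differ from the current service; the iD below is a placeholder.
import requests

ORCID_ID = "0000-0000-0000-0000"  # replace with a real ORCID iD
url = f"https://pub.orcid.org/v3.0/{ORCID_ID}/works"

response = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
response.raise_for_status()

# Works are returned in groups; each group bundles versions of the same
# output reported by different sources, as one or more work summaries.
for group in response.json().get("group", []):
    summary = group["work-summary"][0]
    title = summary["title"]["title"]["value"]
    print(title)
```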

There are other differences that will become apparent when trying to gauge the impact of your research, especially when considering other factors such as who is talking about your work via social media. In that respect ORCID seems to be the preferred unique ID, probably because it’s an open non-profit initiative. It also plays well with the small but increasing number of altmetrics sites such as ImpactStory, but more about that if/when I write about altmetrics. But for now you might want to consider creating and maintaining all three profiles.

So anyway, if you want to check out my own research outputs then my researcher profiles are:

My ORCID
My ResearcherID
My Scopus Author ID

I’ve also just started using ImpactStory so if you want to see what impact I’m apparently having then head over to my ImpactStory profile.

But wait, that’s not all. There are some other interesting researcher profile services around. These are less about establishing a unique researcher ID and more about building a researcher profile on the web and creating a professional social network around researchers. The service that most of my colleagues seem to be taking up is ResearchGate. It’s very easy to use and looks slick. Unfortunately it doesn’t yet use any of the researcher IDs, so there’s still a relatively long-winded process for finding all your papers, unless you import them in BibTeX, EndNote, or another equivalent format, from ResearcherID for example.
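Since the practical route into ResearchGate (and most of these services) is a BibTeX export from one of the other profiles, here is a small, deliberately naive sketch of the housekeeping I end up doing anyway: merging exports and dropping duplicates by DOI before importing. The filenames are placeholders, and real BibTeX parsing is messier than this.

```python
# A naive sketch: merge BibTeX exports (e.g. from ResearcherID and Scopus)
# and drop duplicate entries by DOI before importing them elsewhere.
# Filenames are placeholders; parsing assumes each entry starts with "@"
# at the beginning of a line.
import re

def read_entries(path):
    """Split a BibTeX file into individual entry strings."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    return [e for e in re.split(r"\n(?=@)", text) if e.strip().startswith("@")]

def doi_of(entry):
    """Extract a DOI field from one entry, if present."""
    match = re.search(r"doi\s*=\s*[{\"]([^}\"]+)[}\"]", entry, re.IGNORECASE)
    return match.group(1).strip().lower() if match else None

seen, merged = set(), []
for path in ["researcherid_export.bib", "scopus_export.bib"]:  # placeholder names
    for entry in read_entries(path):
        doi = doi_of(entry)
        if doi and doi in seen:
            continue  # already have this paper from the other export
        if doi:
            seen.add(doi)
        merged.append(entry)

with open("merged.bib", "w", encoding="utf-8") as f:
    f.write("\n\n".join(merged))

print(f"Wrote {len(merged)} unique entries to merged.bib")
```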

Here’s my ResearchGate profile. Using the social networking features you can ‘follow’ me in a similar way to following people on Twitter.

If there are other research profile schemes or researcher networks that you find useful please mention them in the comments section below.




Happy New Year! (again)

Wednesday, January 1st, 2014

Happy new year 2002

A custom of this now nearly dormant weblog is a post at new year. The first such post that I could find was for 2002. Back then I was using Radio UserLand as my blogging platform; WordPress had yet to be invented. By 2002 I had already been blogging for a couple of years. One of the things I regret now is that I wasn’t consistent in where I posted. I ran several blogs at the same time, so a lot of what I posted back then was spread over several sites, a number of which were hosted services from which I never took a local copy of my data. A lesson to us all, but thank goodness for the Wayback Machine. I now self-host my own WordPress installation.

Take a look at this screen grab of one of my early weblogs, which allowed anyone to post to my site via email: Dave Winer, without whom there wouldn’t have been a UserLand community, and Aaron Swartz, both posting to my blog on the same day! Aaron was only 15 at the time and had been working on the RDF/XML media type. He went on to do so many great things before his untimely death almost exactly a year ago. Also posting that day was Scott Lofteness, someone else who has achieved great things. I was in good company.

So as another new year starts and I reflect back on this weblog, I notice that there has been a shift away from publishing to my own personal blog; instead I post to different sites depending on the context or the medium. I use Twitter, Facebook, Flickr, and several other services that have fragmented where all my stuff goes. I can’t say yet whether this is a good or a bad thing, because these sites make it easy to create great content and I still have control of my data. But I do miss those late evenings and early mornings of the early days of blogging. These days, however, I need my sleep.
