29 September 2017

My K-index


Matthew Hahn reminded me of a paper that proposed the “Kardashian index” for scientists. I don’t like the paper. It demeans outreach by implying that a scientist with a high score (author Neil Hall suggested anything over 5) is an attention seeker who contributes little and should get back to the lab. There are some pretty crummy sexist overtones in the choice of “Kardashian” as a descriptor, too. There was much criticism of this paper when it came out, which I’m not going to rehash here.

But I didn’t know there was an online calculator. (Hat tip to Genome Biology.) And yes, I was curious enough to enter my numbers. My score was 17.46, the highest of anyone in Hahn’s Twitter thread by quite a ways.
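(For anyone else curious, the formula from Hall’s paper is simple enough that you can skip the calculator. Here is a minimal sketch in Python; the follower and citation counts in the example are made up.)

# Kardashian index, as defined in Hall (2014):
# expected followers F(C) = 43.3 * C^0.44, where C = citation count;
# K = actual followers / expected followers.

def kardashian_index(followers, citations):
    expected_followers = 43.3 * citations ** 0.44
    return followers / expected_followers

# Example with invented numbers: 2,000 followers, 450 citations.
print(round(kardashian_index(2000, 450), 2))  # ~3.14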

I like trying to do outreach, and I know I do niche research. So this is not surprising to me.

If you love science, love methods sections


Mensh and Kording (2017) have a new paper on scientific writing. It’s very good. I agree with most of their advice. But not this.

You should also allocate your time according to the importance of each section. The title, abstract, and figures are viewed by far more people than the rest of the paper, and the methods section is read least of all. Budget accordingly.

No. Do not skimp on the time you spend on your methods sections.

I get where this advice is coming from. It’s the same sentiment that has led some journals to put their methods section at the end, or to stuff parts of papers away in online “supplemental information.”

But we read papers for lots of different reasons. I read lots of papers that are only tangentially related to my own work, out of curiosity. But when a paper is in my field, and I need to understand it, I dig deep into the methods section.

I’ve run into so many cases where something that looked like a solid finding turned out to be very shaky once you realized how the data were collected. Mensh and Kording are right that few people read the methods, but their advice neglects that those who do are going to be the most intense and critical readers.

A recent feature in Nature showed that weak detailing of methods was leading to irreproducible results (my emphasis).

In one particularly painful teleconference, we spent an hour debating the proper procedure for picking up worms and placing them on new agar plates. Some batches of worms lived a full day longer with gentler technicians. Because a worm’s lifespan is only about 20 days, this is a big deal. Hundreds of e-mails and many teleconferences later, we converged on a technique but still had a stupendous three-day difference in lifespan between labs. The problem, it turned out, was notation — one lab determined age on the basis of when an egg hatched, others on when it was laid.

The article gives multiple examples of how hard it is to standardize methodologies, and how important doing so is to achieving consistent results. This older Drugmonkey post makes a similar point.

The methods section is where the rubber meets the road in terms of actually conducting science. If you don’t get that methods section right, you’re wasting the time of people who come afterwards.

References

Mensh B, Kording K. 2017. Ten simple rules for structuring papers. PLoS Computational Biology 13(9): e1005619. https://doi.org/10.1371/journal.pcbi.1005619

Lithgow GJ, Driscoll M, Phillips P. 2017. A long journey to reproducible results. Nature 548: 387–388. https://doi.org/10.1038/548387a

External links

The most replicated finding in drug abuse science

28 September 2017

Paying out of pocket


Anne Madden asked:

Academic scientists, how many of you have contributed significant out of pocket funds (or fam. $) to make your science happen?

My newest paper cost me, personally, at least $5,919.24.

Every month for five years, I drove from Edinburg to Beach Access Road #6 on South Padre Island. Google Maps says that’s 92.2 miles, so 184.4 miles round trip. Five years is 60 months, and the going rate for mileage reimbursement in Texas is $0.535 per mile.

92.2 miles × 2 × 5 × 12 × $0.535 / mile = $5,919.24.
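(If you want to run your own version of this depressing calculation, here is the same arithmetic as a few lines of Python; the numbers are the ones from this post.)

# Back-of-the-envelope field work mileage cost.
one_way_miles = 92.2      # Edinburg to Beach Access Road #6
trips_per_year = 12       # one trip a month
years = 5
rate_per_mile = 0.535     # Texas mileage reimbursement, dollars per mile

total_miles = one_way_miles * 2 * trips_per_year * years
print(f"{total_miles:,.1f} miles = ${total_miles * rate_per_mile:,.2f}")
# 11,064.0 miles = $5,919.24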

And I know there are months I went more than once, so that is a conservative estimate. I also ate lunch on every one of those collecting trips. So maybe another $600 on top of that.

That project also involved family money, because my mom bought a new shovel when I was on the beach collecting and the one I was using broke.

It’s probably good if I don’t do this calculation very often.

I also paid out of pocket for this year’s American Society of Parasitologists meeting in San Antonio. The meeting was practically in my backyard (only a four hour drive; that’s close in Texas), so it was relatively cheap (a drive to San Antonio costs much less than a plane ticket). It was close to the end of the fiscal year (ours starts 1 September), and there is rarely travel money in the budget by then. Plus, there’s just less paperwork.

That said, I know I have it better, and I have reached a point of financial security where I can “opt out” of dealing with the torture that is university purchasing and reimbursement. Others are not so lucky. Here’s Holly Bik (lightly edited):

Serious proposal: If we want more minorities / first generation students to stay in science, we need to fix the travel reimbursement pyramid scheme.

I just found out that my university, University of California Riverside, can only prepay my conference registrations with a paper form (and it takes 3 weeks). Admins can’t pay anything travel-related with credit cards (they don’t have them). Seriously. It’s 2017. Everything is online.

I’m a first generation college student with $150,000 in student loan debt. And now UC Riverside wants me to pay more than $5,000 out of pocket for my work-related travel. For this summer’s conferences, I’ve probably paid more than $200 in credit card interest while waiting for reimbursements (money I don’t get back). Travel reimbursement varies across universities – some are pretty good. But UC Riverside is probably one of the worst I’ve experienced to date.

Sometimes there are “perfect storms” of conference, workshop, and other travel invites. And these are so important for early career people. So if you have money, you can travel for work (and suck up the out of pocket expenses). And you meet people, build a network, have a successful career. But if you are saddled with debt, you may forgo important opportunities because you just can’t eat up those travel costs. Your career suffers.

The biggest irony is that UC Riverside is proud of us first generation grads and faculty, but institutional bureaucracy works against us in horrible ways.

Institutional purchasing is terrible. And sometimes I think it’s bad on purpose, to drive people like me into paying for things myself that the university should pay for, just because I don’t want to deal with the perpetual hassle and headaches of trying to fill out forms and get reimbursed.

Update, 29 September 2017: Tara Smith contributes to the discussion.

From Bik’s thread, some places seem to be able to front costs – why can’t that be universal? It seems like a small thing when you have money, but for many struggling academics it’s the difference between “making it” and leaving the field.

Related posts

Indie spirit
Who paid for my open access articles?

External links

The high cost of academic reimbursement

27 September 2017

Scientific societies need to compete on services


There’s an editorial in the journal mBio asking members to publish in society journals.

The editorial contains some nice data outlining the changes in the publishing scene. But the argument it advances sounds like special pleading.

Just as the prefixes “Nature” and “Cell” seem to bring gravitas to many journal titles for some scientists and represent implied signals for authors and readers about the quality of the papers that they publish, journals published by professional scientific societies should carry the same authority. After all, they have a long tradition of authoritative leadership and management and are edited by some of the most accomplished scientists in their fields. Professional societies provide legitimacy to the journals they publish. When an author submits a paper to a scientific society journal, or when someone reads a paper published in a scientific society journal, they can be assured that the journal is legitimate and has a decades-long track record of quality.

This paragraph may be right that journals from societies “should” have authority. But you cannot assert authority or credibility. Credibility is determined by what other people say or do. The editors should be asking themselves the hard question of why they lost that credibility.

For instance, my reaction is similar to the one I had for Evolution Letters. Let’s do a price check. It costs at least twice as much to publish in the society journal mBio ($2,250-$3,000) as in PeerJ ($1,095). It’s not clear to me what I get for shelling out all that extra cash.

The editorial tries to claim societies offer superior editorial services.

Scientific society journals are managed and edited by scientists actively working in the fields covered by the journals. ... Although there are certain challenges in using academic editors (12), they bring the experience, expertise, and authority that enables professional societies to refine their missions and set the standards of their fields as they evolve.


This one is tough, because all I can say is that I have published with many journals, and I have not seen consistent differences in the editorial services between those run by scientific societies and those that are not. A Twitter poll suggests society journals may have a very slight edge.

The editorial ends with the very generic claim that publishing in a society journal “helps the society.” That may be, but people will only stick with sub-par services out of sentiment for so long.

Society journals are carriage makers that see the Model Ts on the road. And they don’t know how to adapt. Their only argument is, “But the horse is a noble animal. You like horses, right?” If scientific societies don’t figure this out quickly, they will be relegated to a tiny little niche in the academic publishing industry. 

Update, 26 January 2023: Another editorial extols the benefits of publishing in society journals. As usual, the argument is that profits from the journal support the society as well as going to the for-profit publisher (Wiley, in this case). The journal’s article processing charges for open access (if you choose to go that route) are US$3,910, more than double the current article processing charge for PeerJ.

Related posts

The problem is scientists, not publishers


References

Schloss PD, Johnston M, Casadevall A. 2017. Support science by publishing in scientific society journals. mBio 8(5): e01633-17. https://doi.org/10.1128/mBio.01633-17

26 September 2017

Microsoft Academic: Second impressions


By happenstance, I thought of Microsoft’s imitation of Google Scholar, Microsoft Academic Research, yesterday. I reviewed it years ago, but hadn’t thought about it much since. I wondered what had happened to it.

Quite a lot, as it turned out. The original website was decommissioned, and the name was shortened to Microsoft Academic. Version 2 launched in July 2017. I thought it was worth a new look.

Sadly, the second look is not much more promising than the first one.

One of the changes is that you can create a profile of your papers. That could be good. I’ve found profiles in other similar sites to be kind of useful. Okay, you have to create an account. No problem, I do that all the time... and you hit the first oddity.

Weirdly, you must use another site to set up your account. You can’t just give an email address and pick a password, like pretty much every other website on the planet. You have to use Twitter, Google, Facebook, Microsoft, or LinkedIn.

I thought, “My institutional ID is handled by Microsoft, and I use that to log in to Office 365, so that should work.” Nope. So I logged in with Google.

I discovered a possible reason why Microsoft Academic won’t let me use my institutional email as I started building a profile. It asks for your institution, and snootily insists, “An affiliation must be selected from the list of suggestions.” Except that, according to it, the University of Texas System consists of just five institutions, not fourteen. And my university was not one of those five.


Building out a profile was weird, too. I thought that this next gen scholarly database would support ORCID, so you could enter one number and have it gather all your publications. Nope.

Microsoft Academic seems to identify authors by some mystic combination of name, institution, and... something else. For example, it considers the Zen Faulkes who was at the University of Melbourne a different author than the Zen Faulkes at the University of Texas-Pan American.


So you have to go in and “claim” your papers from however many random ways Microsoft Academic has parsed your name. I have a very distinctive name, and my papers were still split across something like ten different authors. I cannot imagine how many ways publications might be split if you have a common name. Or if you have changed your name.

I found some of my new papers by searching the database for the title. I added them to my profile by clicking the paper’s hyperlink, clicking “Claim,” then going to another page and clicking “Submit claim” again. That is a lot of clicking.

The profile lists “top papers,” but the metric Microsoft Academic uses to determine “top” is not clear. It’s not citations, because the citation counts of the papers in my list are: 11, 28, 17, 9, 16.

Maybe the profile has a few weak spots, but good coverage of the scientific literature might still make it valuable. I searched for “Emerita benedicti” (on my mind since the publication of my newest paper last Friday). That gives 15 hits in Microsoft Academic, but over 2,300 in Google Scholar. Even if I search for that exact combination of words in Google Scholar, I am still left with 32 hits, more than double Microsoft Academic’s yield.

Microsoft makes much of Academic’s “semantic search,” so it may be that I will find it more useful when I try more complex searches, rather than simple things like a species name.

The home page for all this provides a customized dashboard with a calendar of scientific conferences, research news, recent publications, and recent citations of your papers (not visible in the screenshot below).


Google Scholar gives you a couple of alerts on its home page, a sparse approach that leaves you with no doubt as to what its job is: Google Scholar is a search engine. It’s not quite clear what Microsoft Academic wants to be. Its home page feels more like a science news feed; more the science section of Google News than Google Scholar.

Perhaps the kiss of death in all this is that practically everything on this site feels like it’s moving through molasses. It’s sloooooooow. I spent a lot of time looking at a screen like this, waiting for it to populate:



Even while writing this blog post, I got a “You do not have permission to view this directory or page” error message when I tried to go to the home page. Google Scholar feels like it’s using telepathy in comparison. (Update, 26 September 2017: A bot attack may have been responsible for this slow performance.)

I will keep trying Microsoft Academic for a while to see if I learn more. But this project is now at least six years old. And darn it, it still feels like a clunky beta version, just like it did back in 2011, not the modern 2.0 version of an academic search engine that it claims to be.

Related posts

Microsoft Academic Research: First impressions

External links

Zen Faulkes’s Microsoft Academic profile

24 September 2017

Paying to publish and Poynder

Richard Poynder and Michael Eisen got into it on Twitter over the weekend about open access publishing. Poynder wrote:


My view is that PLOS legitimised a deeply-flawed business model: pay-to-publish.

Hm. The problem is that many journals used “pay to publish” before PLOS journals came along. They were called “page charges.” You can still find many journals with page charges that are not open access. Cofactor has a list here.

The long history of page charges seems to indicate that asking scientists to bear some of the cost of publication is not inherently problematic. At least, I certainly don’t recall any serious discussion of page charges as deeply flawed. There probably should have been. But people accepted page charges as a normal, routine part of some corners of academic publishing. Saying PLOS legitimized that model is questionable.

PLOS ONE revolutionized academic publishing. But what was revolutionary was its editorial policy of not screening for “importance.” That led to it publishing a lot of papers and generating a lot of money. It was through that combination that PLOS ONE paved the way for many imitators, including bad journals (documented in Stinging the Predators).

To me, the bigger problem is that “pay to publish” is very often equated – wrongly – with “open access.” The business model used to support publishing is not closely related to whether people can freely read the paper.

External links

Journals that charge authors (and not for open access publication)

22 September 2017

When two lines of research collide

It’s so nice to have two new papers drop in short succession! I had one come out in Journal of Coastal Research last week, and another paper drops today in PeerJ!

A couple of years ago, I posted this picture to try to explain who I ended with papers strewn across multiple research fields.


Little did I know then that a couple of those lines of research were destined to collide:


This paper started, as several of my papers have, with a chance observation. I was working with a summer undergraduate student on a project related to my ongoing effort to understand the basic biology of the sand crab Lepidopa benedicti (reviewed in yesterday’s post).

I looked under the microscope at a sand crab brain we were staining, and thought, “Hey, I recognize that!” It was a larval tapeworm. I’d coauthored two papers about how they infect the nervous system of local white shrimp (Carreon et al. 2011, Carreon and Faulkes 2014).

I had already co-authored a paper on parasitic nematodes in sand crabs (Joseph and Faulkes 2014). But when we did the initial screen for parasites in this species, we missed the tapeworm larvae entirely! Even though we had spent a lot of time looking at them in shrimp, we did not notice them.

Once I recognized that this familiar parasite was in sand crabs, it was off to the races. I knew how to visualize the parasite from the “tapeworms in shrimp” papers. I knew what behaviour tests we could do from the “nematodes in sand crabs” paper. This project was, to me, very low hanging fruit that I was confident could yield a paper quite quickly.

But it became so much cooler than I ever expected as the data started rolling in. I had both sand crabs and mole crabs available, so I checked both for tapeworms. It became obvious quickly that the infection patterns in sand crabs and mole crabs were very different. I tweeted out a few graphs while I was collecting the data:


You don’t get differences that obvious that early that often. And it held up! And it was consistent with something else in my archive...

I had some unpublished data from the nematode project. My former student, Meera, had searched for those nematodes in mole crabs. We couldn’t find any. That result was okay for a conference poster at the 2014 parasitology meeting, but on its own was just an observation and probably not publishable.

But having two parasites show the same infection pattern in two species – one species heavily infected, the other one practically uninfected – now that was much more interesting.

The paper came together, as expected, pretty quickly. I submitted it to PeerJ. I’ve published with them before, and I was recently reminded how much I like their editorial process. They truly did build the better mousetrap. They are prompt but thorough. I do think PeerJ’s submission process for figures is far more fiddly than it needs to be, even though I understand why it is that way.

I also wanted to milk my PeerJ lifetime membership a little more. I got it when it was $99 per author for life. With two papers, buying that membership when I did has probably saved me thousands of dollars in article processing fees.

One thing that makes me happy about this pair of papers that has just come out (this and the phenology one) is that I genuinely feel that I have made progress in understanding the basic biology of these sand crabs. Yes, albuneid sand crabs are obscure little critters that few other people care about.

But a lot of papers feel like you’re mostly filling in details, or producing variations on an established theme. It’s very satisfying to have a project where you genuinely feel you are shedding new light on a topic. That’s why I kept doing the sand crab papers.

And I did have a student email me with a question about sand crabs not too long ago, so maybe these papers aren’t just to make me happy. Maybe some other people will find them cool and useful, too.


Related posts

Connections in my scientific career
Staying active in the lab and/or field when you’re the boss
823 days: A tale of parasite publication
Where’s the site for the parasite?
Tracking tiny worms

References

Carreon N, Faulkes Z. 2014. Position of larval tapeworms, Polypocephalus sp., in the ganglia of shrimp, Litopenaeus setiferus. Integrative and Comparative Biology 54(2): 143-148. https://doi.org/10.1093/icb/icu043

Carreon N, Faulkes Z, Fredensborg BL. 2011. Polypocephalus sp. infects the nervous system and increases activity of commercially harvested white shrimp (Litopenaeus setiferus). Journal of Parasitology 97(5): 755-759. https://doi.org/10.1645/GE-2749.1

Faulkes Z. 2017. Filtering out parasites: sand crabs (Lepidopa benedicti) are infected by more parasites than sympatric mole crabs (Emerita benedicti). PeerJ 5: e3852. https://doi.org/10.7717/peerj.3852

Joseph M, Faulkes Z. 2014. Nematodes infect, but do not manipulate digging by, sand crabs, Lepidopa benedicti. Integrative and Comparative Biology 54(2): 101-107. https://doi.org/10.1093/icb/icu064 

21 September 2017

Fiddly bits and increments


You have to be honest about your papers. I am happy with my latest paper, for several reasons.

  • It’s probably the longest I’ve ever collected data for a paper (five years). 
  • Part of it was crowdfunded. 
  • Part of it was first published in a tweet. 
  • It’s open access.

But I admit this paper is a little scrappy.

My newest sand crab paper continues a project that started because an REU student, Jessica Murph, wanted to do a field project. Jessica collected about a year’s worth of data from the field. I continued for a second year because I didn’t think it would be publishable with only one year of data. It took a long time (don’t get me started), but we got that paper published (Murph and Faulkes 2013).

But even after two years of data gave us a paper, I just kept going out to the field every month. I didn’t have any super strong reason to do so. I needed sand crabs for other projects (like Joseph and Faulkes 2014), but I didn’t need to keep records of the size, sex, and number per transect of the animals I was bringing back to the lab. But I did anyway.

One cool thing that happened while I did so was that I found a new species for the area – Lepidopa websteri – in 2012. That turned into a little paper of its own (Faulkes 2014). But a couple of years later, I found another specimen of this species. And then a third. While range extensions are an accepted thing in describing the distribution of a species, confirmations saying, “Yes, it’s still here” are not enough to publish a paper. Even when they are notoriously hard beasties to find.


Later, I found an orange sand crab. I’d co-authored a paper (Nasir and Faulkes 2011) saying that they were all grey or white, so that was a neat little wrinkle in the colour story. I found a second orange one while I was curating Real Scientists, and tweeted it out. Thus, a tweet was the first official “publication” of a new colour morph for Lepidopa benedicti! But I only had a couple of individuals, which was, again, not enough to publish a paper.

I did have a few ideas percolating in the back of my mind. I was interested in comparing the local sand crab population with the Atlantic population, and ran a successful crowdfunding campaign in 2012 to do so. (If you weren’t around for my crowdfunding campaigns, those were a lot of fun.)

I collected sand crabs in Florida, but the number of animals I found (three) was – again – not enough to hold up a paper on its own.

Are you seeing a pattern here yet?

Meanwhile, the basic data were slowly piling up, and I was getting a sharper and sharper picture of what this local population of sand crabs was doing month in, month out. Things that I thought were bad luck when I started (like not finding any animals for months at a time) turned out to be part of a pretty predictable pattern. But that wasn’t a new finding; it was just a refinement of a pattern I’d published in the first paper (Murph and Faulkes 2013). An incremental improvement in understanding seasonal abundance was probably not enough for a paper.

The one finding that was genuinely new, and that made me think another paper was viable, was figuring out the reproductive cycle of the sand crabs. In the first two years of data (Murph and Faulkes 2013), we had no evidence of these animals reproducing at my field site at all. Now I know that while reproductive females are hard to find, they are there, and I know when they appear (summer). And I know when the little ones appear (September / October).

That’s why I say this paper is a little scrappy. It includes a lot of fiddly bits and bobs that would not be enough to stand as independent papers. But I wanted to get them in the scientific record somehow. So I used one finding, the annual reproductive cycle, as a sort of tentpole to hold up a few others.

After experimenting with posting a preprint that contained a lot of these data, I settled down to the job of trying to find a real home for all of it. I like to try to get papers into different journals, and I had been eyeing the Journal of Coastal Research. Some senior biology faculty at UTPA (Frank Judd and Bob Lonard) had published there multiple times. It was even more on my radar after attending the 2013 national conference of the ASBPA on South Padre Island.

The submission date on the paper says received 8 July 2016, but I hit “submit” in March. It was only through a haphazard “Hey, I wonder what’s the deal with my paper?” log-in to the journal’s manuscript review system that I learned what was going on. The editor wanted me to fix a few things in the manuscript to bring it in line with the journal’s formatting rules before it went out for review. But the submission system never generated an email to me from the editor saying, “Fix these.” Great. There’s a few months wasted.

But I do want to give the journal credit for things they did well. First, they did very intense copyediting, for which I am always grateful. There are always typos and errors and things that need fixing, and I never find them all on my own. And they drive me mad afterwards.

Second, Journal of Coastal Research is not known as an open access journal. There is no mention of open access publishing options in their (extensive) instructions to authors. But I asked about it during the copyediting and production stage, and was delighted to find that they did have an open access option. And the article processing fee was quite competitive.

I am glad to tell you the story of this sand crab paper, for I have another one to tell you about when it drops... tomorrow!

References

Faulkes Z. 2014. A new southern record for a sand crab, Lepidopa websteri Benedict, 1903 (Decapoda, Albuneidae). Crustaceana 87(7): 881-885. https://doi.org/10.1163/15685403-00003326

Faulkes Z. 2017. The phenology of sand crabs, Lepidopa benedicti (Decapoda: Albuneidae). Journal of Coastal Research 33(5): 1095-1101. https://doi.org/10.2112/JCOASTRES-D-16-00125.1

Joseph M, Faulkes Z. 2014. Nematodes infect, but do not manipulate digging by, sand crabs, Lepidopa benedicti. Integrative and Comparative Biology 54(2): 101-107. https://doi.org/10.1093/icb/icu064

Murph JH, Faulkes Z. 2013. Abundance and size of sand crabs, Lepidopa benedicti (Decapoda: Albuneidae), in South Texas. The Southwestern Naturalist 58(4): 431-434. https://doi.org/10.1894/0038-4909-58.4.431

Nasir U, Faulkes Z. 2011. Color polymorphism of sand crabs, Lepidopa benedicti (Decapoda, Albuneidae). Journal of Crustacean Biology 31(2): 240-245. https://doi.org/10.1651/10-3356.1  

Related posts

Back to the start
1,017 days: when publishing the paper takes longer than the project
Way down south: stumbling across a sand crab (Lepidopa websteri)
Amazons and Goliaths, my #SciFund expedition blog, now available!
A pre-print experiment: will anyone notice?
 
External links

Are two years’ data better than one?

Metrics do not mean academia has been “hacked”

Portia Roelofs and Max Gallien argue that the high altmetric score for a dire article defending colonialism is evidence that academia has been “hacked.”

Academic articles are now evaluated according to essentially the same metrics as Buzzfeed posts and Instagram selfies.

The problem with their thesis is that article metrics are not new. They even discuss this:

Indexing platforms like Scopus, Web of Science and Google Scholar record how many other articles or books cite your article. The idea is that if a paper is good, it is worth talking about. The only thing is, citation rankings count positive and negative references equally.

I’m pretty sure it has been literally decades since I first read articles about using citation data as metrics for article impact. And one of the concerns raised then was about mindless counting of citations. “But,” people would object, “what if an article got a lot of citations because people were saying how bad it was?”

This is not a hypothetical scenario. Go into Google Scholar and look at the number of citations for the retracted paper that tried to link vaccination to autism: 2,624 citations. Or the “arsenic life” paper, which has been discredited, though not retracted: 437 citations. By comparison, my most cited papers are in the low tens of citations.

The defence for using citations as metrics was that negative citations rarely happen. (I seem to recall seeing some empirical data backing that, but it’s been a long time.) But it was certainly much harder for people to dig down and find whether citations were positive or negative before so much of scientific publishing moved online. (Yes, I remember those days when journals were only on paper. ‘Cause I be old.)

Indeed, one of the advantages of the Altmetric applet is that it makes it trivial to go in and see what people are saying. Click on the recorded tweets, and you can see comments like, “So hard to read it without being outraged,” “Apparently not a parody: paper making ‘the case for colonialism’. How does a respected journal allow this? Shocked.” and simply, “Seriously?!” It is hard to find any tweets saying something like, “This is a thoughtful piece.”

It wouldn’t surprise me if the Altmetric folks are working on code that will pick out the valence of words in tweets about the paper; “excellent” versus “outraged,” say. Some papers are analyzing “mood” in tweets already (e.g., Suguwara and colleague 2017).
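(Just to illustrate how simple the crudest version of that idea would be, here is a toy valence scorer in Python. The word lists are invented for this example, and real sentiment analysis uses much larger lexicons or trained models; this is not anything Altmetric actually runs.)

# Toy tweet valence: count hits from tiny hand-made word lists.
POSITIVE = {"excellent", "thoughtful", "important", "great"}
NEGATIVE = {"outraged", "shocked", "parody", "seriously"}

def valence(tweet):
    words = {w.strip(".,!?’'\"").lower() for w in tweet.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

print(valence("So hard to read it without being outraged"))  # -1
print(valence("This is a thoughtful piece."))                 # +1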

So the issue that Roelofs and Gallien are discussing is not a new kind of issue, although it could be new in the degree to which it is happening. But Roelofs and Gallien fail to show even a single example of the thing they claim is such a danger: someone, anyone, arguing that this is a good paper, a worthy piece of scholarship, because of its altmetric score.

It is fair to point out that any metric needs to be used thoughtfully. But I am not seeing the chilling effects, or even much potential for the chilling effects, that they are so worried about.

A partisan news site can play to outrage and be rewarded for it because of its business model. It generates revenue from ads and clickthroughs. Academic journals are not like such news sites. They are not ad supported. They are not generating revenue from clicks and eyeballs. Academic journals exist in a reputation economy. They rely on reputation for article submissions, peer reviews, and editors.

For me, a bigger problem than journals being rewarded for criticism with high altmetric scores is that journals can be so effectively insulated from criticism by publisher bundling (a.k.a. “big deals”). It’s almost impossible for a library to cancel a subscription to a single journal. (And yes, I know there are arguments for cancelling all subscriptions.)

External links

Clickbait and impact: how academia has been hacked

18 September 2017

A pre-print experiment, continued


Over a year ago, I uploaded a preprint to bioRxiv. When people upload preprints, bioRxiv sensibly puts on a disclaimer that, “This article is a preprint and has not been peer-reviewed.”

A little over a week ago, the final, paginated version of the paper that arose from the preprint was published. Now, bioRxiv is supposed to update its notice automatically to say, “Now published in (journal name and DOI).”

Perhaps because the final paper was substantially different from the preprint – in particular, the title changed – bioRxiv didn’t catch it. I had to email bioRxiv’s moderators through the contact form asking them to make the update.

The preprint was making more work for me. Again. It wasn’t a lot of work, I admit, but people advocating preprints often talk about them as though they take effectively zero time. They don’t. You have to pay attention to them to ensure things are being done properly. I want people to cite the final paper when it’s available, not the preprint.

Some journals are talking about using bioRxiv as their submission platform. This would be a good step, because it would remove work duplication.

I’m glad I’ve been through the preprint experience. But I am still not sold on its benefits to me as a routine part of my workflow. It seems all the advantages I might gain from preprints can be achieved by other means, notably publishing in open access journals with a good track record of sound peer review and quick production times.

Related posts

A pre-print experiment: will anyone notice?

13 September 2017

A look back at the crystal ball

I wrote the bulk of this post five years ago, back in 2012. That week, a paper came out in Nature that claimed to predict... the future! At least, it claimed to predict one part of my academic future, namely, my h-index:


At the time the paper came out, there was an online calculator. It hasn’t succumbed to link rot: it’s still there! I entered the following values then:

  • Current h-index: It was 8 in 2012 (according to Google Scholar).
  • Number of articles: 24 (I only counted my original technical articles).
  • Years since first article: 20 (then; my first paper was in 1992).
  • Number of distinct journals: 20.
  • Number in “top” journals (a.k.a. the glamour mags): 0.

The program predicted my h-index now, five years later, would be 13. Since I used my Google Scholar data, I went back and checked my Google Scholar profile.
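(For the record, the regression behind the calculator comes from the paper itself, by Acuna, Allesina, and Kording. As I remember the published coefficients, the one-year version looks like the sketch below; the five- and ten-year horizons use different coefficients, so this is just to show the shape of the model.)

import math

# One-year h-index prediction from Acuna et al. (2012), coefficients
# as I remember them (treat as an illustration, not gospel):
# h(+1) = 0.76 + 0.37*sqrt(n) + 0.97*h - 0.07*y + 0.02*j + 0.03*q
def predicted_h_next_year(h, n, y, j, q):
    # h = current h-index, n = articles, y = years since first article,
    # j = distinct journals, q = articles in "top" journals
    return 0.76 + 0.37 * math.sqrt(n) + 0.97 * h - 0.07 * y + 0.02 * j + 0.03 * q

# My 2012 inputs:
print(round(predicted_h_next_year(h=8, n=24, y=20, j=20, q=0), 1))  # ~9.3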


How did the prediction fare? Zooming in...


Holy cow!


Perfect. The prediction was perfect.

It’s a bit spooky.

Now I’m having one of those existential crises about whether my fate is set and whether there is anything I can do about it. As Ahab said in Moby Dick:

Is it I, God, or who, that lifts this arm? But if the great sun move not of himself; but is as an errand-boy in heaven; nor one single star can revolve, but by some invisible power; how then can this one small heart beat; this one small brain think thoughts; unless God does that beating, does that thinking, does that living, and not I.

The 2012 prediction reaches ten years forward, predicting an h-index of 21 in 2022. Of course, my publishing profile has changed in five years. I entered my updated data, and experienced my second existential crisis of the day:


My predicted h-index for 2022 has gone down from five years ago! The new prediction drops my 2022 h-index by 3 points! Argh! It does kind of make you feel like you’re failing at science.

Next, to schedule a post with this graph for 2022. We’ll see how close it is.

Related posts

Gazing into the crystal ball of h-index
Academic astrology

11 September 2017

Chasing pidgeys


In the game Pokémon Go, pidgeys are pokémon that you see everywhere. They’re super common, super small. They are not very powerful. You’d be hard pressed to win any gym battle with them.

When I started playing the game, I quickly stopped collecting them because, well, I had them already. And they seemed useless.

But I was wrong. And now I chase after them all the time.

There are a lot of different resources in Pokémon Go, but one is experience. You “level up” as a player after earning a certain number of experience points. One of the ways to get experience points is to evolve pokémon, and you get quite a lot of experience for doing so. It turns out that pidgeys are cheap to evolve. A few other pokémon are just as cheap, but they are much less common and harder to catch.
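(The arithmetic, with game values as I remember them from 2017 – they may well have changed since: evolving costs 12 pidgey candies, each catch yields 3 candies plus 1 more for transferring the pidgey afterwards, and every evolution is worth 500 experience points.)

CANDY_TO_EVOLVE = 12
CANDY_PER_CATCH = 3 + 1   # catch plus transfer
XP_PER_EVOLUTION = 500

catches_per_evolution = CANDY_TO_EVOLVE / CANDY_PER_CATCH   # 3 catches
print(f"{XP_PER_EVOLUTION / catches_per_evolution:.0f} XP per pidgey caught")
# 167 XP per pidgey caught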


Thus, what looks like something trivial and boring turns out to be one of the most reliable ways to advance in the game.

It occurred to me that this is a good metaphor for careers, including academic careers. Much of your success comes from chasing pidgeys: the boring, mundane tasks that you have to do a lot of, and that earn little recognition individually. Grading assignments, getting reviews back to editors, going to meetings, consistently working on papers.

(This post was inspired by a student in General Biology who asked me what level I was at in Pokémon Go and whether I’d caught Raikou yet.)

Picture from here.

08 September 2017

The Voynich manuscript and academic writing


The Voynich manuscript is a potentially obsession-creating item. It’s routinely described with phrases like “the world’s most mysterious book.” For more than a century, nobody could read it or make heads or tails of what it was about. Debate raged about whether it was written in code or was just an unreadable hoax.

Until recently.

The book has, apparently, finally yielded to insightful scholarship and has been decoded.

(I)t was more or less clear what the Voynich manuscript is: a reference book of selected remedies lifted from the standard treatises of the medieval period, an instruction manual for the health and wellbeing of the more well to do women in society, which was quite possibly tailored to a single individual.

But what I want to talk about is not the solution, but about writing style and communication.

Here we have a century-old mystery, solved. Here’s how I learned about it: a tweet from Benito Cereno that read:

Holy shit holy shit holy shit holy shit

The Voynich manuscript, decoded

You can feel Benito’s excitement in that tweet! This is so exciting, there’s no time for punctuation marks!

Now read Nicholas Gibbs’s first-hand account of solving this mystery. Here’s the opening paragraph, which does use a good narrative structure, the ABT (and, but, therefore) template championed by Randy Olson:

For medievalists or anyone with more than a passing interest, the most unusual element of the Voynich manuscript – Beinecke Ms. 408, known to many as “the most mysterious manuscript in the world” – is its handwritten text. (And) Although several of its symbols (especially the ligatures) are recognizable, adopted for the sake of economy by the medieval scribes, the words formed by its neatly grouped characters do not appear to correspond to any known language. (And) It was long believed that the text was a form of code – one which repeated attempts by crypt­o­graphers and linguists failed to penetrate. (And) As someone with long experience of interpreting the Latin inscriptions on classical monuments and the tombs and brasses in English parish churches, I recognized in the Voynich script tell-tale signs of an abbreviated Latin format. But interpretation of such abbreviations depends largely on the context in which they are used. (Therefore) I needed to understand the copious illustrations that accompany the text.

But even with that good narrative structure in place, the opening paragraph shows many of the problems with this article. Like many academics, Gibbs overloads on facts, with “and, and, and...” before we get to the “but.”

It’s about as devoid of excitement as you can imagine. This is a very careful walk-through of the process. To use another of Randy Olson’s ideas, the “four organs of communication” (pictured; more in Don’t Be Such a Scientist), this description is all head (intellect). There’s nothing from the heart (emotion) or gut (intuition, humour). No emotion, nothing personal.

It’s disappointing.

Gibbs completely bypasses the intensity of interest in the strange book, and how many people have tried to crack it. “Repeated attempts” is a weak way to describe a century-long effort to crack this text. It is exactly the typically cautious, couched language that is used in academic writing all the time.

And having solved a problem that so many people have brought so much talent and effort to bear upon, you might expect Gibbs to describe opening a bottle of champagne in celebration. Or maybe a beer. Or the satisfaction he had from his insights – the “Aha!” moments, as it were.

Instead, Gibbs treats it with about as much enthusiasm as a walk from the living room couch to the bathroom.

You want to hear about the feeling of triumph at solving the puzzle, not just the step-by-step solution to it.

If you want to connect with people, you need the passion. You need the guts. You need the emotions.

Update, 9 September 2017: I’m seeing tweets from people grumbling that the Voynich manuscript probably hasn’t been solved. Nobody I’ve seen has said why they doubt the solution. (Update, 10 September 2017: Ah, see here.) Regardless, that doesn’t change the points made here.

Related posts

Connection: Hollywood Storytelling Meets Critical Thinking review
Review: Don’t Be Such a Scientist

External links

Voynich manuscript: the solution
So much for that Voynich manuscript “solution”

Picture from here.

02 September 2017

Thank you, New Hampshire


It’s been a week since Harvey changed everything for Houston, Texas.

And since then, I’ve been waiting. After Katrina hit New Orleans, my university (then The University of Texas-Pan American) offered enrollment to students affected by the hurricane. Since Harvey was hitting Texas, I expected that and more.

I emailed our president’s office, reminding them of what happened back in 2005. I got an email back from our Office of Emergency Preparedness, saying:

(UTRGV) has been in communication with University of Texas System... since last week. There are system-wide plans in place in the event student relocation becomes necessary.

I waited to hear what those system-wide plans were. I waited all week. All that happened at my institution was that the Athletics department teamed up with a Texas grocery store to fundraise. Hardly an institution-wide response or plan.

Finally, University of Texas System Chancellor William McRaven wrote this, titled “Texans stop for no storm.”

This annoys me to no end. It feels like McRaven is taking this moment to say, “Look how tough we are,” posturing instead of actually offering concrete plans for help.

On Twitter, the UT System account tweeted a Storify about how institutions were helping people affected by Harvey. And this is nice, but it’s things like student organizations fundraising and universities offering counseling services, not institutions offering anything like what a New Hampshire university has done.

Franklin Pierce University will provide free tuition, room and board to up to 20 students for the fall semester.

That’s what I was expecting UTRGV and other UT System universities to do. But no.

Thank you, Franklin Pierce University, for doing for Texas students what Texas universities didn’t.

Related posts

Credit where it’s due

External links

New Hampshire university to take in students after Harvey
Texans stop for no storm

Picture from here.