22 September 2017

When two lines of research collide

It’s so nice to have two new papers drop in short succession! I had one come out in Journal of Coastal Research last week, and another paper drops today in PeerJ!

A couple of years ago, I posted this picture to try to explain how I ended up with papers strewn across multiple research fields.

Little did I know then that a couple of those lines of research were destined to collide:

This paper started, as several of my papers have, with an unplanned chance observation. I was working with a summer undergraduate student on a project related to my ongoing effort to understand the basic biology of the sand crab Lepidopa benedicti (reviewed in yesterday’s post).

I looked under the microscope at a sand crab brain we were staining, and thought, “Hey, I recognize that!” It was a larval tapeworm. I’d coauthored two papers about how they infect the nervous system of local white shrimp (Carreon et al. 2011, Carreon and Faulkes 2014).

I had already co-authored a paper on parasitic nematodes in sand crabs (Joseph and Faulkes 2014). But when we did the initial screen for whether there were any parasites in this species, we missed the tapeworm larvae entirely! Even though we had spent a lot of time looking at them in shrimp, we did not notice them.

Once I recognized that there was this familiar parasite in sand crabs, it was off to the races. I knew how to visualize the parasite from the “tapeworm in shrimp” papers. I knew what behaviour tests we could do from the “nematodes in sand crabs” paper. This project was, to me, very low-hanging fruit that I was confident could yield a paper quite quickly.

But it became so much cooler than I ever expected as the data started rolling in. I had both sand crabs and mole crabs available, so I checked both for tapeworms. It became obvious quickly that the infection patterns in sand crabs and mole crabs were very different. I tweeted out a few graphs while I was collecting the data:

You don’t get differences that obvious that early that often. And it held up! And it was consistent with something else in my archive...

I had some unpublished data from the nematode project. My former student, Meera, had searched for those nematodes in mole crabs. We couldn’t find any. That result was okay for a conference poster at the 2014 parasitology meeting, but on its own was just an observation and probably not publishable.

But having two parasites show the same infection pattern in two species – one species heavily infected, the other one practically uninfected – now that was much more interesting.

The paper came together, as expected, pretty quickly. I submitted it to PeerJ. I’ve published with them before, and I was recently reminded how much I like their editorial process. They truly did build the better mousetrap. They are prompt but thorough. I still think PeerJ’s submission process for figures is far more fiddly than it needs to be, even though I realize why it is that way.

I also wanted to milk my PeerJ lifetime membership more. I got it when it was $99 per author for life. With two papers, buying that membership when I did has probably saved me thousands of dollars in article processing fees.

One thing that makes me happy about this pair of papers that has just come out (this and the phenology one) is that I genuinely feel that I have made progress in understanding the basic biology of these sand crabs. Yes, albuneid sand crabs are obscure little critters that few other people care about.

But a lot of papers feel like you’re mostly filling in details, or are variations on an established theme. It’s very satisfying to have a project where you genuinely feel you are shedding new light on a topic. That’s why I kept doing the sand crab papers.

And I did have a student email me with a question about sand crabs not too long ago, so maybe these papers aren’t just to make me happy. Maybe some other people will find them cool and useful, too.

Related posts

Connections in my scientific career
Staying active in the lab and/or field when you’re the boss
823 days: A tale of parasite publication
Where’s the site for the parasite?
Tracking tiny worms


Carreon N, Faulkes Z. 2014. Position of larval tapeworms, Polypocephalus sp., in the ganglia of shrimp, Litopenaeus setiferus. Integrative and Comparative Biology 54(2): 143-148. https://doi.org/10.1093/icb/icu043

Carreon N, Faulkes Z, Fredensborg BL. 2011. Polypocephalus sp. infects the nervous system and increases activity of commercially harvested white shrimp (Litopenaeus setiferus). Journal of Parasitology 97(5): 755-759. https://doi.org/10.1645/GE-2749.1

Faulkes Z. 2017. Filtering out parasites: sand crabs (Lepidopa benedicti) are infected by more parasites than sympatric mole crabs (Emerita benedicti). PeerJ 5: e3852. https://doi.org/10.7717/peerj.3852

Joseph M, Faulkes Z. 2014. Nematodes infect, but do not manipulate digging by, sand crabs, Lepidopa benedicti. Integrative and Comparative Biology 54(2): 101-107. https://doi.org/10.1093/icb/icu064 

21 September 2017

Fiddly bits and increments

You have to be honest about your papers. I am happy with my latest paper, for several reasons.

  • It’s probably the longest I’ve ever collected data for a paper (five years). 
  • Part of it was crowdfunded. 
  • Part of it was first published in a tweet. 
  • It’s open access.

But I admit this paper is a little scrappy.

My newest sand crab paper continues a project that started because an REU student, Jessica Murph, wanted to do a field project. Jessica collected about a year’s worth of data from the field. I continued for a second year because I didn’t think it would be publishable with only one year of data. It took a long time (don’t get me started), but we got that paper published (Murph and Faulkes 2013).

But even after two years of data gave us a paper, I just kept going out to the field every month. I didn’t have any super strong reason to do so. I needed sand crabs for other projects (like Joseph and Faulkes 2014), but I didn’t need to keep records of size and sex and number per transect of animals I was bringing back to the lab. But I did anyway.

One cool thing that happened while I did so was that I found a new species for the area – Lepidopa websteri – in 2012. That turned into a little paper of its own (Faulkes 2014). But a couple of years later, I found another specimen of this species. And then a third. While range extensions are an accepted thing in describing the distribution of a species, confirmations saying, “Yes, it’s still here” are not enough to publish a paper. Even when they are notoriously hard beasties to find.

Later, I found an orange sand crab. I’d co-authored a paper (Nasir and Faulkes 2011) saying that they were all grey or white, so that was a neat little wrinkle on the colour story. I found a second orange one when I was curating Real Scientists, and tweeted that out. Thus, a tweet was the first official “publication” of a new colour morph for Lepidopa benedicti! But I only had a couple of individuals, which was, again, not enough to publish a paper.

I did have a few ideas percolating in the back of my mind. I was interested in comparing the local sand crab population with the Atlantic population, and ran a successful crowdfunding campaign in 2012 to do so. (If you weren’t around for my crowdfunding campaigns, those were a lot of fun.)

I collected sand crabs in Florida, but the number of animals I found (three) was – again – not enough to hold up a paper on its own.

Are you seeing a pattern here yet?

Meanwhile, the basic data were slowly piling up, and I was getting a sharper and sharper picture of what this local population of sand crabs was doing month in, month out. Things that I thought were bad luck when I started (like not finding any animals for months at a time) turned out to be part of a pretty predictable pattern. But that wasn’t a new finding; it was just a refinement of a pattern I’d published in the first paper (Murph and Faulkes 2013). An incremental improvement in understanding seasonal abundance was probably not enough for a paper.

The one finding that was genuinely new, and that made me think another paper was viable, was figuring out the reproductive cycle of the sand crabs. In the first two years of data (Murph and Faulkes 2013), we had no evidence of these animals reproducing at my field site at all. Now I know that while reproductive females are hard to find, they are there, and I know when they appear (summer). And I know when the little ones appear (September / October).

That’s why I say this paper is a little scrappy. It includes a lot of fiddly bits and bobs that would not be enough to stand as independent papers. But I wanted to get them in the scientific record somehow. So I used one finding, the annual reproductive cycle, as a sort of tentpole to hold up a few others.

After experimenting with posting a preprint that contained a lot of these data, I settled down to the job of trying to find a real home for all this. I like to try to get papers into different journals, and I had been eyeing the Journal of Coastal Research. Some senior biology faculty at UTPA (Frank Judd and Bob Lonard) had published there multiple times. It was even more on my radar after attending the 2013 national conference of the ASBPA on South Padre Island.

The submission date on the paper says received 8 July 2016, but I hit “submit” in March. It was only through a haphazard “Hey, I wonder what’s the deal with my paper?” that I thought to log in to the journal’s manuscript review system, when I learned what was going on. The editor wanted me to fix a few things in the manuscript to bring it in line with the journal’s formatting rules before it went out for review. But the submission system never generated an email to me from the editor saying, “Fix these.” Great. There’s a few months wasted.

But I do want to give the journal credit for things they did well. First, they did very intense copyediting, for which I am always grateful. There are always typos and errors and things that need fixing, and I never find them all on my own. The ones that slip through drive me mad afterwards.

Second, Journal of Coastal Research is not known as an open access journal. There is no mention of open access publishing options in their (extensive) instructions to authors. But I asked about it during the copyediting and production stage, and was delighted to find that they did have an open access option. And the article processing fee was quite competitive.

I am glad to tell you the story of this sand crab paper, for I have another one to tell you about when it drops... tomorrow!


Faulkes Z. 2014. A new southern record for a sand crab, Lepidopa websteri Benedict, 1903 (Decapoda, Albuneidae). Crustaceana 87(7): 881-885. https://doi.org/10.1163/15685403-00003326

Faulkes Z. 2017. The phenology of sand crabs, Lepidopa benedicti (Decapoda: Albuneidae). Journal of Coastal Research 33(5): 1095-1101. https://doi.org/10.2112/JCOASTRES-D-16-00125.1

Joseph M, Faulkes Z. 2014. Nematodes infect, but do not manipulate digging by, sand crabs, Lepidopa benedicti. Integrative and Comparative Biology 54(2): 101-107. https://doi.org/10.1093/icb/icu064

Murph JH, Faulkes Z. 2013. Abundance and size of sand crabs, Lepidopa benedicti (Decapoda: Albuneidae), in South Texas. The Southwestern Naturalist 58(4): 431-434. https://doi.org/10.1894/0038-4909-58.4.431

Nasir U, Faulkes Z. 2011. Color polymorphism of sand crabs, Lepidopa benedicti (Decapoda, Albuneidae). Journal of Crustacean Biology 31(2): 240-245. https://doi.org/10.1651/10-3356.1  

Related posts

Back to the start
1,017 days: when publishing the paper takes longer than the project
Way down south: stumbling across a sand crab (Lepidopa websteri)
Amazons and Goliaths, my #SciFund expedition blog, now available!
A pre-print experiment: will anyone notice?
External links

Are two years’ data better than one?

Metrics do not mean academia has been “hacked”

Portia Roelofs and Max Gallien argue that the high altmetric score for a dire article defending colonialism is evidence that academia has been “hacked.”

Academic articles are now evaluated according to essentially the same metrics as Buzzfeed posts and Instagram selfies.

The problem with their thesis is that article metrics are not new. They even discuss this:

Indexing platforms like Scopus, Web of Science and Google Scholar record how many other articles or books cite your article. The idea is that if a paper is good, it is worth talking about. The only thing is, citation rankings count positive and negative references equally.

I’m pretty sure it has been literally decades since I first read articles about using citation data as metrics for article impact. And one of the concerns raised then was about mindless counting of citation data. “But,” people would object, “what if an article got a lot of citations because people were saying how bad it was?”

This is not a hypothetical scenario. Go into Google Scholar, and look at the number of citations for the retracted paper that tried to link vaccination to autism. 2,624 citations. Or the “arsenic life” paper, which has been discredited, though not retracted. 437 citations. By comparison, my most cited papers are in the low tens of citations.

The defence for using citations as metrics was that negative citations rarely happen. (I seem to recall seeing some empirical data backing that, but it’s been a long time.) But it was certainly much harder for people to dig down and find whether citations were positive or negative before so much of scientific publishing moved online. (Yes, I remember those days when journals were only on paper. ‘Cause I be old.)

Indeed, one of the advantages of the Altmetric applet is that it is trivial to go in and see what people are saying. Click on the recorded tweets, and you can see comments like, “So hard to read it without being outraged,” “Apparently not a parody: paper making 'the case for colonialism'. How does a respected journal allow this? Shocked,” and simply, “Seriously?!” Hard to find any tweets saying something like, “This is a thoughtful piece.”

It wouldn’t surprise me if the Altmetric folks are working on code that will pick out the valence of words in tweets about the paper; “excellent” versus “outraged,” say. Some papers are analyzing “mood” in tweets already (e.g., Suguwara and colleague 2017).
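At its simplest, that kind of valence scoring is just counting words from positive and negative lexicons. Here is a toy sketch; the word lists are my own illustrative assumptions, not anything Altmetric actually uses:

```python
# Toy word-valence scorer: counts positive vs. negative words in a tweet.
# The word lists are illustrative assumptions, not a real sentiment lexicon.
POSITIVE = {"excellent", "thoughtful", "insightful", "important"}
NEGATIVE = {"outraged", "shocked", "parody", "bad", "dire"}

def tweet_valence(tweet: str) -> int:
    """Return a crude mood score: positive word count minus negative word count."""
    words = {w.strip(".,!?\"'").lower() for w in tweet.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

print(tweet_valence("So hard to read it without being outraged"))  # → -1
print(tweet_valence("This is a thoughtful piece."))                # → 1
```

Real sentiment analysis, of course, uses much larger lexicons or trained models, but the underlying idea of scoring the valence of words is the same.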

So the issue that Roelofs and Gallien are discussing is not a new kind of issue, although it could be new to the degree it is happening. But Roelofs and Gallien fail to show even a single example of the thing they claim to be such a danger: someone, anyone arguing that this is a good paper, a worthy piece of scholarship, because of its altmetric score.

It is fair to point out that any metric needs to be used thoughtfully. But I am not seeing any chilling effects, or even much potential for chilling effects, that they are so worried about.

A partisan news site can play to outrage and be rewarded because of their business models. They generate revenue by ads and clickthroughs. Academic journals are not like such news sites. They are not ad supported. They are not generating revenue by clicks and eyeballs. Academic journals exist in a reputation economy. They rely on reputation for article submissions, peer reviews, and editors.

For me, a bigger problem than journals being rewarded for criticism by high altmetric scores is that journals can be so effectively isolated from criticism by publisher bundling (a.k.a. “big deals”). It’s almost impossible for a library to cancel a subscription to one journal. (And yes, I know there are arguments for cancelling all subscriptions.)

18 September 2017

A pre-print experiment, continued

Over a year ago, I uploaded a preprint to bioRxiv. When people upload preprints, bioRxiv sensibly puts a disclaimer on them: “This article is a preprint and has not been peer-reviewed.”

A little over a week ago, the final, paginated version of the paper that arose from the preprint was published. Now, bioRxiv is supposed to update its notice automatically to say, “Now published in (journal name and DOI).”

Perhaps because the final paper was substantially different than the preprint – in particular, the title changed – bioRxiv didn’t catch it. I had to email bioRxiv’s moderators through the contact form asking them to make the update.

The preprint was making more work for me. Again. It wasn’t a lot of work, I admit, but people advocating preprints often talk about them as though they take effectively zero time. They don’t. You have to pay attention to them to ensure things are being done properly. I want people to cite the final paper when it’s available, not the preprint.

Some journals are talking about using bioRxiv as their submission platform. This would be a good step, because it would remove work duplication.

I’m glad I’ve been through the preprint experience. But I am still not sold on its benefits to me as a routine part of my workflow. It seems all the advantages that I might gain from preprints can be achieved by other methods, notably publishing in open access journals with a good track record of peer review and production times.

Related posts

A pre-print experiment: will anyone notice?

13 September 2017

A look back at the crystal ball

I wrote the bulk of this post five years ago, back in 2012. That week, a paper came out in Nature that claimed to predict... the future! At least, it claimed to predict one part of my academic future, namely, my h-index:

At the time the paper came out, there was an online calculator. It hasn’t succumbed to link rot: it’s still there! I entered in the following values then:

  • Current h-index: It was 8 in 2012 (according to Google Scholar).
  • Number of articles: 24 (I only counted my original technical articles).
  • Years since first article: 20 (then; my first paper was in 1992).
  • Number of distinct journals: 20.
  • Number in “top” journals (a.k.a. the glamour mags): 0.
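If memory serves, the model behind that calculator was a simple linear formula fit for neuroscientists. Here is a sketch of the one-year version, with the coefficients quoted from memory and therefore illustrative only:

```python
from math import sqrt

# One-year h-index prediction along the lines of the Nature paper's linear
# model for neuroscientists. Coefficients are quoted from memory and should
# be treated as illustrative, not authoritative.
def predict_h_next_year(h, n_articles, years, n_journals, n_top_journals):
    """Predict next year's h-index from the current publication record."""
    return (0.76
            + 0.37 * sqrt(n_articles)   # number of articles
            + 0.97 * h                  # current h-index
            - 0.07 * years              # years since first article
            + 0.02 * n_journals         # number of distinct journals
            + 0.03 * n_top_journals)    # number in "top" journals

# My 2012 inputs from the list above:
print(round(predict_h_next_year(h=8, n_articles=24, years=20,
                                n_journals=20, n_top_journals=0), 1))  # → 9.3
```

The multi-year predictions came from separate coefficients fit for each horizon, so this one-year formula will not reproduce the five-year prediction of 13 directly.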

The program predicted my h-index now, five years later, would be 13. Since I used my Google Scholar data, I went back and checked my Google Scholar profile.

How did the prediction fare? Zooming in...

Holy cow!

Perfect. The prediction was perfect.

It’s a bit spooky.

Now I’m having one of those existential crises of whether my fate is set and whether there is anything I can do about it. As Ahab said in Moby Dick:

Is it I, God, or who, that lifts this arm? But if the great sun move not of himself; but is as an errand-boy in heaven; nor one single star can revolve, but by some invisible power; how then can this one small heart beat; this one small brain think thoughts; unless God does that beating, does that thinking, does that living, and not I.

The 2012 prediction reaches ten years forward, predicting an h-index of 21 in 2022. Of course, my publishing profile has changed in five years. I entered my updated data, and experienced my second existential crisis of the day:

My predicted h-index for 2022 has gone down from five years ago! The new prediction drops my 2022 h-index by 3 points! Argh! It does kind of make you feel like you’re failing at science.

Next, to schedule a post with this graph for 2022. We’ll see how close it is.

Related posts

Gazing into the crystal ball of h-index
Academic astrology

11 September 2017

Chasing pidgeys

In the game Pokémon Go, pidgeys are pokémon that you see everywhere. They’re super common, super small. They are not very powerful. You’d be hard pressed to win any gym battle with them.

When I started playing the game, I quickly stopped collecting them because, well, I had them already. And they seemed useless.

But I was wrong. And now I chase after them all the time.

There are a lot of different resources in Pokémon Go, but one is experience. You “level up” as a player with a certain number of experience points. One of the ways to get experience points is to evolve pokémon, and you get quite a lot of experience for doing so. It turns out that pidgeys are cheap to evolve. A few other pokémon are just as cheap, but they are much less common, and harder to catch.

Thus, what looks like something trivial and boring turns out to be one of the most reliable ways to advance in the game.

It occurred to me that this is a good metaphor for careers, including academic careers. Much of your success comes from chasing pidgeys: the boring, mundane tasks that you have to do a lot of, and that earn little recognition individually. Grading assignments, getting reviews back to editors, going to meetings, consistently working on papers.

(This post inspired by a student in General Biology who asked me what level I was at in Pokémon Go and whether I’d caught Raikou yet.)

Picture from here.

08 September 2017

The Voynich manuscript and academic writing

The Voynich manuscript is a potentially obsession-creating item. It’s routinely described with phrases like, “the world’s most mysterious book.” For more than a century, nobody could read it or make heads or tails of what it was about. Debate raged about whether it was coded or just an unreadable hoax.

Until recently.

The book has, apparently, finally yielded to insightful scholarship and has been decoded.

(I)t was more or less clear what the Voynich manuscript is: a reference book of selected remedies lifted from the standard treatises of the medieval period, an instruction manual for the health and wellbeing of the more well to do women in society, which was quite possibly tailored to a single individual.

But what I want to talk about is not the solution, but about writing style and communication.

Here we have a century old mystery, solved. Here’s how I learned about it. A tweet from Benito Cereno that read:

Holy shit holy shit holy shit holy shit

The Voynich manuscript, decoded

You can feel Benito’s excitement in that tweet! This is so exciting, there’s no time for punctuation marks!

Now read Nicholas Gibbs’s first-hand account of solving this mystery. Here’s the opening paragraph, which does use a good narrative structure, the ABT (and, but, therefore) template (championed by Randy Olson):

For medievalists or anyone with more than a passing interest, the most unusual element of the Voynich manuscript – Beinecke Ms. 408, known to many as “the most mysterious manuscript in the world” – is its handwritten text. (And) Although several of its symbols (especially the ligatures) are recognizable, adopted for the sake of economy by the medieval scribes, the words formed by its neatly grouped characters do not appear to correspond to any known language. (And) It was long believed that the text was a form of code – one which repeated attempts by crypt­o­graphers and linguists failed to penetrate. (And) As someone with long experience of interpreting the Latin inscriptions on classical monuments and the tombs and brasses in English parish churches, I recognized in the Voynich script tell-tale signs of an abbreviated Latin format. But interpretation of such abbreviations depends largely on the context in which they are used. (Therefore) I needed to understand the copious illustrations that accompany the text.

But even with that good narrative structure in place, the opening paragraph shows so many of the problems of this article. Like many academics, Gibbs overloads on facts, with “and, and, and...” before we get to the “but.”

It’s about as devoid of excitement as you can imagine. This is a very careful walk-through of the process. To use another of Randy Olson’s ideas, the “four organs of communication” (pictured; more in Don’t Be Such a Scientist), this description is all head (intellect). There’s nothing from the heart (emotion) or gut (intuition, humour). No emotion, nothing personal.

It’s disappointing.

Gibbs completely bypasses the intensity of interest in the strange book, and how many people have tried to crack it. “Repeated attempts” is so weak a way to describe a century-long set of efforts to crack this thing. It is the typically cautious, couched language that is used in academic writing all the time.

And having solved a problem that so many people have brought so much talent and effort to bear upon, you might expect Gibbs to describe opening a bottle of champagne in celebration. Or maybe a beer. Or a description of the satisfaction he had from his insights – the “Aha!” moments, as it were.
Instead, Gibbs treats it with about as much enthusiasm as a walk from the living room couch to the bathroom.

You want to hear about the feeling of triumph of solving the puzzle, not just the step by step solution to it.

If you want to connect with people, you need the passion. You need the guts. You need the emotions.

Update, 9 September 2017: I’m seeing tweets from people grumbling that the Voynich manuscript probably hasn’t been solved. Nobody that I’ve seen has said why they doubt that the problem is solved. (Update, 10 September 2017: Ah, see here.) Regardless, that doesn’t change the points made here.

Related posts

Connection: Hollywood Storytelling Meets Critical Thinking review
Review: Don’t Be Such a Scientist

External links

Voynich manuscript: the solution
So much for that Voynich manuscript “solution”

Picture from here.

02 September 2017

Thank you, New Hampshire

It’s been a week since Harvey changed everything for Houston, Texas.

And since then, I’ve been waiting. After Katrina hit New Orleans, my university (then The University of Texas Pan American) offered enrollment to students affected by the hurricane. Since Harvey was hitting Texas, I expected that and more.

I emailed our president’s office, reminding them of what happened back in 2005. I got an email back from our Office of Emergency Preparedness, saying:

(UTRGV) has been in communication with University of Texas System... since last week. There are system-wide plans in place in the event student relocation becomes necessary.

I waited to hear what those system-wide plans were. I waited all week. All that happened at my institution was that the Athletics department teamed up with a Texas grocery store to fundraise. Hardly an institution-wide response or plan.

Finally, University of Texas System Chancellor William McRaven wrote this, titled “Texans stop for no storm.”

This annoys me to no end. It feels like McRaven is taking this moment to say, “Look how tough we are,” posturing instead of actually offering concrete plans for help.

On Twitter, the UT System account tweeted a Storify about how institutions were helping people affected by Harvey. And this is nice, but it’s things like student organizations doing fundraising and universities offering counseling services, not institutions offering anything like what a New Hampshire university has done.

Franklin Pierce University will provide free tuition, room and board to up to 20 students for the fall semester.

That’s what I was expecting UTRGV and other UT System universities to do. But no.

Thank you, Franklin Pierce University, for doing for Texas students what Texas universities didn’t.

Related posts

Credit where it’s due

External links

New Hampshire university to take in students after Harvey
Texans stop for no storm

Picture from here.

29 August 2017

Why a proposed UTRGV doctoral program will probably struggle

When I took my current job, one of the things that attracted me was that I was told the department would probably have a Ph.D. program, maybe in about five years. It’s been a lot more than five years, but a biology related Ph.D. is finally on the horizon for my university. This should make me happy. It does not.

Last week, the UT System tweeted:

.@utrgv Pres Bailey looks to create PhD in Cellular, Molecular & Biomedical Sciences and Doctor of Physical Therapy programs. #UTRegentsMeet

And yesterday, it was confirmed that the university has the go ahead for preliminary planning for this proposed doctoral degree.

I want to go on record as to why I think this is not a good idea. (You get tenure in part so you can make these kinds of analyses.) For context, I have been the graduate program coordinator for biology at this institution for over a decade. So yeah, I know the backstory here.

First and foremost, the primary issue I have with the proposal for this degree is that it is being driven by institutional wants. Not by clear needs in the community. Not by students’ interests. Not by faculty research strengths. The university is trying to get to ten doctoral programs as fast as it possibly can, so it can meet the criteria for an “emerging research university.” Getting to that number of Ph.D. programs by any means it can is more important than coming up with a program that has faculty support and that will ultimately serve the students.

Second, the proposed program – “Cellular, molecular, and biomedical sciences” – might as well say, “and the kitchen sink.” There is no theme or connection there. There is no department of “cell, molecular, and biomedical sciences.” It seems like the plan is to conscript any faculty member in any department that knows how to use a PCR machine. With no single department to house the program, there will be tremendous problems of organization and cohesion. It will be difficult to instill that intangible but critical sense of community.

Third, there are already four cell and molecular biology doctoral programs in Texas (not to mention broader general biology programs). They are at the University of North Texas, UT Austin, UT Dallas, and one of our closest neighbours, UT San Antonio. There is a case to be made that the proposed degree would unnecessarily duplicate existing programs, which the Texas Higher Education Coordinating Board warns against.

Lastly, the graph that no administrator has an answer for is this one (from here):

The article has UTRGV president Guy Bailey talking about job growth projections from the Bureau of Labor Statistics. But knowing demand doesn’t tell you much unless you know the supply. Administrators will ignore the existing backlog in students being trained, and the growth of programs training them.

The Bureau of Labor Statistics lists one biology related occupation that requires a doctoral degree: “Biological science teachers, postsecondary” (i.e., professors). They project a total of 21,200 job openings from 2012-2022. Using 2011 data on doctoral production, the projected 10 year need can be met in less than three years at recent rates of doctorate creation at the national level.

The only other biology related occupation listed by the Bureau of Labor Statistics that requires a doctoral degree is animal scientists (1,200 job openings, 8.8% growth). Again, this need can be met by current levels of doctorate attainment in the United States.
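The arithmetic behind “met in less than three years” is simple division. In this sketch, the annual production figure is my assumed stand-in for the 2011 data (US biology doctorate production was on the order of 8,000 per year):

```python
# Rough supply-vs-demand check for biology doctorates.
# annual_phds is an assumed stand-in for the 2011 US production figure
# (on the order of 8,000 per year); projected_openings is the Bureau of
# Labor Statistics 2012-2022 projection cited above.
projected_openings = 21_200   # postsecondary biology teaching jobs, 2012-2022
annual_phds = 8_000           # assumed annual US biology doctorate production

years_to_fill = projected_openings / annual_phds
print(f"Ten-year demand met in about {years_to_fill:.1f} years")
```

Even if the assumed production figure is off by a couple of thousand either way, the ten-year projected demand is absorbed in well under a decade of graduates.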

There are not clear projections for how much demand there will be for biology doctoral recipients outside academia, because the Bureau of Labor Statistics does not separate “Life sciences” jobs on whether they require a B.S., M.S., or Ph.D.

I think the students and the region deserve a good doctoral program they can be proud of. Instead, we’re likely to get a rushed, rudderless, “me too” doctoral program that nobody asked for and nobody wanted.

External links

UTRGV gets green light to seek two new doctorate degrees

28 August 2017

A sense of community

Last week, Stephen Heard wrote a post about being paid for peer review that generated a lot of discussion on Twitter. A fair number of people were quite emphatic that they were not being paid, and a few seemed very grumpy about that.

Earlier in my career, I remember people complaining about the individual reviews they got, or how long reviews took. But I don’t remember people grumbling over doing reviews, or not being compensated for them. And I never heard complaints from editors about how many people were refusing to do reviews.

Now, I suspect part of this is just a shift in perspective. I hear more voices via social media than I did before, and hear more perspectives. I know people who are on editorial boards on Twitter, which I didn’t before. Still, from my perspective, it feels like grumpiness over having to do peer review is a relatively recent thing.

It seemed to me that annoyance about doing reviews might be symptomatic of researchers having a degraded sense of community.

When you feel like you belong to a community, you just pitch in. You help. Not because you are paid to do so, but because it’s friends and neighbours and it’s just what you do to make your community a nice place to be.

I think people are refusing to do reviews in part because they don’t feel connected to the academic community. And I get why that would be: it’s a rough, competitive market for ideas now. The shortages in funds, jobs, and everything else feel like they are forcing people into a “me first, me only” mindset to try to survive.

People complain about journals more when they don’t feel those journals are part of their scientific community. Maybe this is why many academics have continued to support society journals, even as more and more of them are run by one of the big “for profit” publishers.

I have been thinking a lot about community, too, because of things like university administration. This tweet went out last week, reading in part:

@utrgv President Bailey looks to create PhD in Cellular, Molecular & Biomedical Sciences.

Yeah, neither faculty nor students asked for that program. It certainly doesn’t make me feel part of a community in my own institution.

Same with graduate programs. I’ve seen some research that one of the biggest predictors of successful programs is that graduate students feel a sense of belonging. That is, of community. And while I tried to create that feeling in our graduate program, I have come to the conclusion I have failed.

This is one reason why science Twitter and the science online community has been important to me: because it truly does seem like a community. People offer ideas and support, for no reason, just because. Someone came up with the term “pocket friends,” which I think is a good phrasing. I’ve said to a lot of people that online conversations are real conversations. And online friends are real friends.

Update, 29 August 2017: This post was featured in today’s Daily Briefing in the Chronicle of Higher Education. Thanks to them!

Update, 30 August 2017: Mike Taylor has a response.

External links

Can we stop saying reviewers are unpaid?

Picture from here.

25 August 2017

Big bad Harvey

I’m probably going to be mostly unaffected by Hurricane Harvey. The University of Texas Rio Grande Valley campuses are closed today and tomorrow. Currently, the forecast is for a few inches of rain and possible power outages.

But the rest of Texas isn’t going to fare so well.

Harvey’s going to be very, very bad.

It’s hard to believe that on Monday, nature was treating millions to probably her most beautiful sight – a total solar eclipse. And here we are on Friday, same week, with millions of people dreading one of her most deadly events.

I hope my friends and colleagues elsewhere in Texas and Louisiana stay safe.

Update, 27 August 2017: Houston this morning. Pic from here.

This is after one day, and there are more days to come yet.

This tweet shows water flooding into a news building after it rose a foot in 15 minutes. Terrifying.

External links

Hell and high water (From 2016, on why Houston did not get more prepared for a hurricane like Harvey after Ike in 2008)
Hurricane Harvey aims for the Texas fracking boom’s favorite port

21 August 2017

Eclipse 2017

It’s eclipse day, 2017!

Here in South Texas, we only got a partial eclipse. I’ve seen a total solar eclipse (in 1979, the last one over North America before today), and a partial one pales in comparison. Seriously, if you ever get a chance to see a total solar eclipse, go. It’s stunning.

Today, the main thing I did was play around with making a pinhole with my hands. I was surprised at how easy it was to see the crescent of the sun, with a distinct “bite” taken out of it. Here’s the image with a bit of processing:

And close-up, looking more like the moon at night than the sun in day.

Related posts

A gift

10 August 2017

Me, 2017

My selfie game is not strong. And I generally feel super scruffy when I am out at the beach collecting. But I like how this pic of me in the field Tuesday came out.

I was scruffy, but the day was gorgeous. I was very lucky to be out on the beach that morning.

But this is a good opportunity to draw your attention to Paige Jarreau’s project on scientist selfies. There is an Instagram account of other scientist selfies here, a Flickr collection, and a Twitter hashtag, #ScientistsWhoSelfie (of which the picture above is one). And you can support her project on Experiment!

04 August 2017

The Journal Loyalty Index

Happy coincidences make good prompts for blog posts. Earlier this week, I returned a review of a manuscript for a journal. It was one I had never reviewed for before, so I added it to a list of reviews I keep in my CV. It was the thirty-ninth journal I had reviewed a paper for.

I was curious whether this was a particularly high number. People often talk about how many reviews they do (or are asked to do), but not how many different journals are asking. I asked on Twitter. Thirty-nine different journals does seem to be on the high end:

Coincidentally, Stephen Heard published this post about how many journals he had published in, which he, following the example of twitchers, called a “life list.” This interested me, because I deliberately try to publish in as many different journals as possible. It’s a science macho thing: I want to see how many different editors I can fool, er, convince.

As with reviewing, I knew about how many papers I had published, but not their distribution across journals. Stephen had proposed the Journal Diversity Index (JDI): the number of journals divided by the number of papers. Everyone has a JDI of 1.0 when they publish their first paper, and it only declines from there.
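
A minimal sketch of the JDI calculation, using a made-up publication record (the journal names here are invented for illustration):

```python
# Hypothetical publication record; each entry is one paper's journal.
papers = ["PeerJ", "PeerJ", "Journal of Coastal Research",
          "Journal of Crustacean Biology", "Marine Biology"]

# Journal Diversity Index: number of distinct journals / number of papers.
jdi = len(set(papers)) / len(papers)
print(jdi)  # 4 distinct journals across 5 papers = 0.8
```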

My JDI is higher than Stephen’s, but several commenters in his post beat us both.

But this got me thinking about the two things together. What’s the relationship between the journals we publish in, and the journals we review for? I expected there to be substantial overlap:

After all, the reasons you submit to a journal and review for a journal are the same: you have expertise in that area. Moreover, editors look at their lists of authors when looking for reviewers, which I would expect to increase the overlap between the journals you publish in and the journals you review for.

I mean, it would be weird if there was almost no connection between your papers and your reviews, right?

But when I crunched the numbers, I was a bit surprised. The three sets were almost equal in size.

There were 18 journals I have published in, but have never reviewed for.

There were 15 journals for which I have both published and reviewed papers. What I was expecting to be the biggest category turned out to be the smallest category.

There are 24 journals I have reviewed papers for, but never published in. That the “review only” part of the Venn diagram is bigger than the “publish only” part of the diagram doesn’t surprise me, because my online friends on editorial boards are constantly talking about how hard it is to find reviewers for manuscripts.

Like Stephen, we can make a simple index: the number of journals you have both published in and reviewed for (B), divided by the total number of journals whose editors you have interacted with (E). We’ll call it the Journal Loyalty Index (JLI).

There are 15 journals I have both published in and reviewed for, out of 57 journals I have interacted with, making my JLI (15/57) a mere 0.26.
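
As a sanity check, the same calculation can be written as a few lines of Python, using the counts above:

```python
# Counts from the Venn diagram above.
published_only = 18   # published in, never reviewed for
both = 15             # published in and reviewed for (B)
reviewed_only = 24    # reviewed for, never published in

# Journal Loyalty Index: B divided by all journals interacted with (E).
total = published_only + both + reviewed_only  # E = 57
jli = both / total
print(round(jli, 2))  # 0.26
```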

I am, of course, curious how loyal other researchers are to their journals.

Related posts

Peer review pariah, update

External links

My journal life list
Journal life list

26 July 2017

World’s worst... scientific papers

I have a new project to share! Just for fun, I spent the last few days making another little ebook, similar to what I did with Presentation Tips.

Stinging the Predators is a collection of deliberately horrible papers that were created to punk predatory journals. There have been six such pranks in the last two years. The most recent, which sort of triggered this project, was on Neuroskeptic’s blog last Saturday. Thinking about all the “sting” papers I’d seen over the years, it occurred to me that fake papers were practically their own emerging genre. And what better way to draw attention to a genre than with a curated anthology?

I collected all the sting papers I knew about. There turned out to be thirteen, and collecting them convinced me that it was useful to have all these examples in one place. Each paper has a short new introduction, and links to articles about it. I rounded off the collection with some short essays, some of which appeared here on the blog before, and a couple of which were new.

Once I got started with this project, I couldn’t let it go. I promised myself I would only work on it for a few days, and then get back to writing that could be published by other people.

The ebook is available on figshare and on DoctorZen.net.

Update, 28 July 2017: After I posted the first version, I was reminded of another sting paper on Google Plus (see? It’s not a ghost town). I found another abstract after that. I decided to make a quick turnaround from version 1 to 2. There are now fifteen entries in this anthology.

The easy to remember link is http://bit.ly/StingPred. (Capitalization matters! “stingpred” will not work.)

Update, 31 July 2017: I know, two revisions in less than a week? I learned of another sting paper, and another conference abstract; the collection now runs to 169 pages of mostly rubbish. (Some will probably say it is all rubbish.)

Update, 7 August 2017: This little project is featured in Times Higher Education today and the Improbable Research blog!

External links

Stinging the Predators on Figshare
Predatory Journals Hit By ‘Star Wars’ Sting
Worst ever research papers revealed
“All these papers were deliberately bad”

18 July 2017

Rough rides at tenure time

Yesterday, I wrote about Dr. Becca’s tumultuous ride through her tenure process. (And thank you to all who read, like, tweeted, shared, and commented!) Becca has done many early career scientists a favour by documenting this difficult process.

I’ve talked from time to time about how important it is to share our failures. But we particularly don’t like to draw attention to issues that came up at tenure time. I wrote about my problems with tenure after I squeaked through the process. I was not writing under a pseudonym, and I didn’t blog about the process much while I was going through it. I was very mad about it then. I don’t get visibly upset talking about it like I used to, but I can’t say I’ve made peace with it. With the better part of a decade between then and now, I can see why I got a hard time at tenure, but I still feel I was not treated well.

Terry McGlynn had an even rougher time. He was denied tenure, which he wrote about extensively in the Chronicle of Higher Education.

But what I haven’t done specifically is to talk about what came afterwards for me. A lot of people think that after academics get tenure, they drop off and take it easy.

I got better.

After tenure, I finally had everything in place. The gears were turning, and the research started coming out much more consistently, with more original, data-driven papers. I had also taken to heart the adage, “Don’t let the perfect be the enemy of the good.” I lowered my standards and stopped waiting for projects to get that one last bit of data. And stuff started to happen for me. I became one of the most published faculty in the department.

I am not trying to brag here. I know many people would look at my research track record and deem it second rate (at best). “Sand crabs, Zen? Nobody cares about your sand crabs.”

The moral of the story? It’s to remind people that trouble, even at this critical point in an academic career, does not have to cripple the rest of your career.

And that publishing well is the best revenge.

Related posts

Now part of the problem
Low points
Nevertheless, she persisted

External links

Coming out of the closet, tenure denial edition

17 July 2017

Nevertheless, she persisted

Sometimes, you get to watch a friend win one. And that win is practically as sweet as one of your own.

Friend of the blog Dr. Becca has been getting a rough ride at tenure time. Until today:


First things first: Congratulations, Becca! I am so happy for you! Wooo!

Other things: Becca’s win is important beyond just the obvious significance for her and her students and collaborators. It needs to be seen and discussed widely for two reasons.

First, her case needs to be talked about because the grief she was getting was all about one thing: money. Scratch that: it was because she didn’t get the right kind of money. Her job was being threatened because she hadn’t brought in a stand-alone research grant from the National Institutes of Health (an NIH R01, to use the jargon).

Becca’s situation is the nightmare scenario that many early career scientists are staring down. The NIH budget is flat, applications are up, and most recognize that the success rate in applying for NIH grants is now so low that many perfectly good projects go unfunded.

In other words, getting a grant has a healthy dose of luck to it, and no amount of grant-writing savvy can ensure you will pull down any particular grant. Lack of a grant does not mean your colleagues think you’re doing crummy science.

Becca’s situation shows how dire and destructive this habit of “outsourcing” tenure decisions to granting agencies has become. Professors and administrators need to talk about this and adjust their expectations to line up with reality, and stop expecting to get blood from a stone, no matter how much they “incentivize” the stone.

This is something that has been buzzing in the background for a long time, but the situation has worsened in the last 6-7 years. Academics are used to stability on much longer time scales, and aren’t prepared to adjust to the ground shifting underfoot in the time it takes a professor to go from hiring to tenure review.

Second, Becca’s case matters more generally than her alone because, as Neil Gaiman (channeling G.K. Chesterton) says:

Fairy tales are more than true: not because they tell us that dragons exist, but because they tell us that dragons can be beaten.

Becca shows that you can fight the dragons of university administration, and you can win. And a lot of early career academics need to know that. Because dragons are big and scary and it is easy to give up and concede the battle.

Becca was confronted with career dragons.

Nevertheless, she persisted.

Related posts

The secret life of a banner
The secret life of a banner, part 2

16 July 2017

The future is female

This year has seen something special. There’s been a hunger for new heroes. You can see it in these projects.

Hidden Figures. It challenged the pop culture juggernaut Star Wars at the box office, and got Oscar nominations, too.

Wonder Woman. The biggest hit of the summer, still going strong.

And now... the thirteenth Doctor.

It’s going to be fantastic.

Congratulations, Jodie Whittaker! I look forward to seeing you pilot the TARDIS and fight the monsters!

Added: Reaction to the latter.

13 July 2017

Five years for seven points of data

I was very excited yesterday. I got to add another data point to this graph:

It’s taken me five years to get those seven data points. Five. Years.

It’s not for lack of trying. Each data point depends on me catching a rare event. There’s a limited amount I can do to try to catch those rare events, so this graph is building up slowly. It’s not quite a pitch-drop experiment, but I am seriously wondering if I am ever going to have enough data that I will feel confident about publishing it.

I share this because there are a lot of people fretting about the speed of science these days. People want fast review, and fast publication. Some are turning to pre-prints for greater speed. But sometimes, try as we might, a question forces you to take a long, slow slog to get to the answer.

12 July 2017

The bat signal: Can cricket ears hear their predators?

(This was originally published here in 2005.)

Few events in animal behaviour evoke as visceral a response in an observer as interactions between predators and prey, leading to poetic metaphors like “nature red in tooth and claw.” The mechanisms through which prey avoid being caught and eaten provide some of the best examples of behaviours whose neural basis is reasonably well understood. For example, in fish, the Mauthner cells are key players in generating C-start escape responses (Korn and Faber 2005); in crayfish, the lateral and medial giant interneurons generate escape tailflips (Edwards et al. 1999). Surprisingly, however, our knowledge of when these well studied circuits are triggered by actual predators in the wild is rather limited, though those gaps are beginning to close (Herberholz et al. 2004).

Crickets have neurons that trigger escape responses, named AN2 (also referred to as Int-1). Unlike fishes’ Mauthner neurons or crayfish’s giant interneurons, which can be triggered by a wide range of sudden stimuli, AN2 neurons appear to serve as detectors for one particular type of predator, namely echolocating bats (Nolen and Hoy 1984, 1987). While AN2 neurons respond to a wide range of sound frequencies, they are particularly sensitive to ultrasound, that is, sound frequencies too high for human ears to hear (Nolen and Hoy 1987). This is approximately the same range of sound frequencies that echolocating bats use when foraging. But, as a recent paper by Fullard and colleagues (Fullard et al. 2005) notes, the key word is “approximately.” There are many species of bats, which differ in their foraging tactics, and emit a wide range of sounds as they do so. Most lab studies, for understandable reasons of simplicity and convenience, have used pure tones generated by computers to trigger crickets’ auditory neurons.

Fullard and colleagues studied Teleogryllus oceanicus, a cricket species found across much of the western Pacific. They recorded the calls of a half-dozen species of bats that share habitat with this cricket, then recorded AN2 neurons as they played back the bat calls at different sound intensities.

The crickets’ AN2 neurons responded to calls from all six bat species, if the sound intensity was 80 decibels sound pressure level (dB SPL) or more, although they did not react equally to all bat search calls.

Simply firing the AN2 neuron, however, does not determine if the cricket can avoid a foraging bat, because a single spike of AN2 is not sufficient to trigger an escape response (Nolen and Hoy 1984). By examining the pattern of firing in more detail, the authors were able to estimate how far away a bat call might trigger an escape response. Only calls by three of the bat species fired AN2 neurons strongly enough to generate escape responses before the bat would be aware of the cricket's echo.

If the AN2 is indeed a “bat detector,” it is reasonable to hypothesize that it has been shaped by natural selection to detect bat species living in the same habitat. All bat calls tested were from species that live in the same regions as T. oceanicus (i.e., sympatric species), but one might reasonably predict that AN2 should be less responsive to calls of bats that do not live in the same regions (i.e., allopatric species). That T. oceanicus has such a wide distribution, however, might mean that its auditory system has remained a bat “generalist.” Another prediction of the “bat detector” hypothesis would be that the bats that AN2 detects best would be those of species that are the most successful predators of crickets. In this case, the bat species Tadarida australis generated the greatest AN2 responses, raising the question of what the natural ecological interactions are between the cricket and the bat.

The bat species that is arguably the least conspicuous to crickets demonstrates the importance of understanding natural ecology in interpreting patterns of neural activity. Of the six species of bat whose calls were tested, the least conspicuous to crickets was Nyctophilus geoffroyi, because the echolocating calls of this species are too short and too high frequency for the crickets’ ears to detect reliably. The simple hypothesis might be that this bat species is a mammalian “stealth bomber:” by using echolocation calls that are almost undetectable by crickets, the bat would seem to be well equipped to pluck crickets from the air at will. Instead, N. geoffroyi seems to forage primarily by “gleaning,” i.e., locating insects by the sounds they emit and picking them off the ground (Bailey and Haythornthwaite 1998), a tactic that circumvents crickets’ tuned AN2 “bat detector” almost entirely.


Bailey WJ & Haythornthwaite S. 1998. Risks of calling by the field cricket Teleogryllus oceanicus: potential predation by Australian long-eared bats. Journal of Zoology 244(4): 505-513.

Edwards DH, Heitler WJ, & Krasne FB. 1999. Fifty years of a command neuron: the neurobiology of escape behavior in the crayfish. Trends in Neurosciences 22(4): 153-160.

Herberholz J, Sen MM, & Edwards DH. 2004. Escape behavior and escape circuit activation in juvenile crayfish during prey-predator interactions. The Journal of Experimental Biology 207(11): 1855-1863.

Nolen TG & Hoy RR. 1984. Initiation of behavior by single neurons: The role of behavioral context. Science 226(4677): 992-994.

10 July 2017

Goodhart’s Law

When a measure becomes a target, it ceases to be a good measure. - Goodhart’s Law

The longer I’m in academia, the more I appreciate the wisdom of this statement.

07 July 2017

Why I stopped writing grants

A couple of threads on Twitter recently reminded me of something. This from Liang Gao (my emphasis):

Just visited a new PI. He showed me beautiful research, top publications , and thousands pages of unfunded proposals. What the hell is going on?

Then there was Prof-like Substance:

Remember that when applying to NSF, this is what you’re up against. ~6% success from the process. Which is why I tell people over and over and over that if you don't diversify, you will get eaten alive. You. Can. Not. Go up against 6% success for a decade and think everything will be fine. It won’t.

Then there was this charming reminder from Jacquelyn Gill that in addition to dealing with biases about sex, race, and “academic pedigree,” you have to deal with biases about geography:

An equipment grant I'm a co-PI on is #NSFunded! I'm grateful for the chance to do some fun new research. But two reviewers mentioned how small UMaine is. One said it “only has 13,000 students.” Another said there’s “not much up there but moose.”

A lot of people are reaching the point I got to maybe four or five years ago, when I wrote:

Personally, if you’d asked me when I started this job if I thought that I’d be able to get grants for my research, I’d have said, “I think it’ll take me a few tries, but I think I can do it.” Well, that hasn’t happened. So I’ve had to re-invent myself, my expectations, everything, from almost the ground up. It’s been a decade-long battle to redefine myself as a scientist. I’m still not done.

I realized that producing thousands of pages of grant proposals was not satisfying for me, either personally or professionally. The odds were long and not improving. People probably think there’s less in South Texas than there is in Maine.

I also realized that managing the grants I did get was not satisfying for me. I’ve complained for a long time that trying to spend a dollar from a grant requires a bottle of aspirin, because it’s an instant headache.

So I mostly quit writing grants. I’m still writing some pre-proposals for NSF, but none have gotten an invitation for a full proposal.

Instead, I have focused on the bit that I find most satisfying for me: writing papers. I have focused on creating “$5 projects” that can go forward, grant or not. My research doesn’t run on money. It runs on willpower.

And I just submitted a manuscript to a journal today, thank you, that was generated with no grant support at all.