29 November 2016

Tuesday Crustie: Shiny


If I’d known Moana had a giant decorator crab in it, I’d have pre-ordered tickets.


Tamatoa also gets the second best song in the movie, although even it pales in comparison to “You’re Welcome.”

28 November 2016

Are footnotes a way to game the Impact Factor?


One of Bradley Voytek’s 99 problems is strange journal demands:

Major journal said we can’t cite biorxiv papers; instead must reference them via footnotes.

I have been rankled by journals’ refusals to cite non-traditional sources before. But this journal wasn’t refusing to acknowledge a source. It was refusing to acknowledge a source in a certain way.

This puzzled me momentarily, but I have a hypothesis. Any time a journal talks about fiddling with citations, there is a prime suspect as to why: the journal Impact Factor. I strongly suspect that footnotes aren’t counted in the calculations of journal Impact Factor like terminal references are, even though footnotes and a reference list in this case would serve the same purpose: to credit a source so that people can find it.
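For readers who haven’t seen it spelled out, the Impact Factor is a simple ratio, which is why excluding a class of citations from the numerator matters. A rough sketch (the numbers below are hypothetical, purely for illustration):

```python
def impact_factor(citations, citable_items):
    """Journal Impact Factor for year Y: citations received in year Y
    to items the journal published in years Y-1 and Y-2, divided by
    the number of citable items published in those two years."""
    return citations / citable_items

# Hypothetical journal: 1,200 citations in 2016 to its 2014-2015 papers,
# which numbered 400 citable items.
jif = impact_factor(1200, 400)  # 3.0
```

If footnoted sources are invisible to the citation indexers that compile these counts, a footnote credits the source for human readers without feeding anyone’s numerator.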

What might a journal have to gain by keeping pre-print servers out of citations? It doesn’t enhance the journal’s own Impact Factor. It doesn’t enhance anyone’s Impact Factor, for that matter. Denying citations to pre-print servers seems futile, since pre-print servers don’t have Impact Factors.

While pre-print servers don’t have Impact Factors, including citations to them might make it easier to collect data about their use. There seems little doubt that the majority of citation analysis is done by text mining and algorithms, rather than by hand. (Notwithstanding the contention by Brembs et al. (2013) that Impact Factors are often negotiated.)

For journals, the very act of data collection about pre-print servers might feel threatening. There are some researchers who want journals to die across the board and wouldn’t mind if pre-print servers (or something like them) rose up to take their place. If it becomes clear through citation analysis that more and more studies on pre-print servers are being cited as reliable sources of information, the uncomfortable question for journals arises:

“What are journals for, exactly?”

Update, 29 November 2016: Bradley Voytek reports that the situation has changed:

The journal editors discussed and changed their policies to allow preprints with DOIs.

How interesting.

References

Brembs B, Button K, Munafò M. 2013. Deep impact: unintended consequences of journal rank. Frontiers in Human Neuroscience: http://dx.doi.org/10.3389/fnhum.2013.00291

Related posts

Why can’t I cite Mythbusters?

Picture from here.

No lead is safe


I normally tell people that I hate football. But homesickness makes you do funny things, so I tuned into the last quarter of the Grey Cup last night.

I was rewarded.

At some point, when the lead was still pretty big for Ottawa, one of the commentators said a CFL motto was, “No lead is safe.” I sort of snickered when I heard that. I would not have believed Calgary could score two touchdowns in less than two minutes, with the last coming in with something like 20 seconds left on the clock to force the game into overtime. What a thriller! At that point, it didn’t matter who won, you just had a great championship game.

The Redblacks are new, formed after I moved to the US. I looked a few things up about the team while the game was in progress. When I went to Wikipedia, I wondered if I was on some sort of time delay, because the entry said, “In the 104th Grey Cup, the Redblacks brought the Grey Cup back to Ottawa for the first time in 40 years.” What...? But... but... the game was still going.

When I came back a few minutes later to get a screen grab of the jumped gun, I saw someone already had some fun with Wikipedia (click to enlarge):


Of course, the Redblacks had the last laugh on the irate wiki contributor, pulling off the overtime win.

Even this football hater can’t resist an underdog victory. Congratulations to the Redblacks on their first Grey Cup win!

External links

Redblacks pull off huge upset to win 104th Grey Cup in OT
Redblacks player lays on the field long after everyone leaves, perfectly wrapping up the Grey Cup

22 November 2016

Watch me now


A “watchlist” has one major job: to intimidate. And boy, Professor Watchlist does that in spades.

The mission of Professor Watchlist is to expose and document college professors who discriminate against conservative students, promote anti-American values, and advance leftist propaganda in the classroom. 

And just like that, we’re in a new era of McCarthyism. The website is horribly vague on what confirmation or vetting goes into this list, what an “anti-American” value is, or what constitutes “leftist propaganda.”

I agree with one thing on this list: professors shouldn’t discriminate against conservative students. Because professors shouldn’t discriminate against anyone.

But hey, conservative students, your ideas have to compete in the free market of ideas and be supported with facts and evidence. That is, conservative students, you don’t get to cry “Discrimination!” if I say, “Evolution is the best scientific explanation we have for diversity of life on this planet” because you happen to be a conservative young Earth creationist.

The website says it’s a project of TurningPoint USA, but the link to it is not always predictable. A link on Twitter informed me that this is the brainchild of one Charlie Kirk.

I clicked on one entry at random, and it linked out to a site called Campus Reform, which I’m pretty sure I’d seen before. It’s part of the Leadership Institute, which describes itself as “Training conservative activists, students, and leaders since 1979.”

I agree with Dr. Becca. Universities need to talk about sites like this, publicly. I was also toying with something Trina McMahon did: flooding the site with “tips.”

And perhaps this is an apt moment to repost this:


Update, 23 November 2016: One of the creators of Professor Watchlist, Alana Mastrangelo, is super happy that people have taken to trolling the “tips” section of the website. Free publicity.

Another creator of Professor Watchlist, Crystal Clanton, claims that someone used the tip line to inform the site’s creators, “I pray for your deaths every day.”


To anyone who would consider writing something like this:


You’re not helping. Knock it off.

External links

Professor Watchlist

The 21-Year-Old Becoming a Major Player in Conservative Politics (from 2015)
David Perry discussing the Watchlist
Heather Cox Richardson on being added to the Watchlist
Exposing 'crazy radical professors': 12 of the best #trollprofwatchlist tweets
Professor Watchlist Is Seen as Threat to Academic Freedom
Teaching in a time of professor watchlists
Academic witch hunts are back: The new McCarthyism, a sign of the stupidity of the post-truth era

21 November 2016

Keeping to schedule

My university publishes an academic calendar of holidays and exams well in advance of the semester. The university is closed this Thursday and Friday (American Thanksgiving). But since last week I’ve had a steady stream of students asking me if we are having classes on Wednesday, and I even got one asking me, “Are we having class today?” (Monday.)

My answer is, “Yes, the university is open and class is happening as usual, as per university policy.”

“Other professors are cancelling class that day.”

I cannot tell you how much this annoys me. I’m not so much annoyed by the students asking, but by my colleagues.

Professors who cancel classes because it’s close to a holiday aren’t being professional. People in other jobs and other professions don’t just get to randomly not show up to work. But professors can cancel class pretty much whenever they want. And someone would probably need to cancel a lot of classes before a department chair or other administrator caught on and commented. This is the sort of thing that gets legislators breathing down our necks with arguments that professors have no accountability.

It bothers me because students are getting shortchanged. Students pay tuition for a certain number of contact hours, and they should be upset that they are not getting the instruction and face time that they are paying for. I suspect that few students think of it this way, probably because many still see their relationship with professors as an adversarial one. A cancelled class is just less work, rather than a missed opportunity to learn. Unfortunately, professors who cancel classes because it’s close to a holiday set a bad example and encourage this “classes are just another thing I have to do” view.

So no, my classes are not cancelled this week. Because I am a professional who takes my obligations seriously.

18 November 2016

How I learned advanced math from a fake documentary

Q. Are pseudoscience shows like Ancient Aliens having a negative effect on the scientific literacy of Americans? (From Quora.)

If you want to rank the biggest negative impacts on the scientific literacy of Americans, I would not put pseudoscientific television documentaries on basic cable at the top of the list.

If you look at the issues where the public disagrees with scientists the most (climate change, evolution, vaccines, genetic modification of food), it’s not because they don’t have access to facts or that basic cable documentaries have misled them. It’s because those issues have become political issues, and political leaders and political pundits actively promote narratives that are not scientifically justified.

That said, sensationalist TV shows like this do have an impact, and that effect is probably generally negative. Andrew David Thaler writes about some of the long term effects here: The Politics of Fake Documentaries. See also: Fish tales: Combating fake science in popular media.

But.

Here’s the thing. This “ancient aliens” genre is not new. It reaches back at least to the late 1960s when Chariots of the Gods? was published, and the early 1970s, when books like this were published:


Now, I read that book as a kid. Yes, there’s a lot of rubbish in it, and I was pretty gullible. I thought a lot of it was plausible. Hey, what did I know, I was a kid. Did reading that book hurt my science literacy?

Well, in all the credulous interpretations of archaeological data (“This ancient gold trinket is described as a bird, but it looks like a jet plane!”), there’s a chapter that I think was called “Shortcuts in space-time” or something like that. And that’s the chapter I remember most about that book. It introduced me to the concept of a tesseract. That’s real mathematics and real science, and it stuck with me because as I learned more, I learned that those ideas were real.

That book was unscientific. But it made me curious. And I learned some new science that I probably wouldn’t have otherwise been exposed to until university, if then. And when I got to university, I discovered The Skeptical Inquirer and learned to be a little less gullible.

I’m not arguing that those are a good means of science education. But they are works of art more than science, and art – and our reactions to it – are complex.

17 November 2016

Why is neuroscience teaching software so bad?


Neuroscience is a discipline that is very well suited to using computer models. There are all sorts of elegant mathematical descriptions of how neurons generate action potentials, how signals propagate along the length of a neuron, how signals from neurons add up and contribute to firing action potentials or not, and more.
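To give a sense of how simple the core of these models can be, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the standard textbook models of spike generation. All parameter values are illustrative, not tied to any particular teaching package:

```python
def simulate_lif(i_ext, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Leaky integrate-and-fire neuron:
    tau * dV/dt = -(V - V_rest) + R_m * I(t),
    with a spike and reset whenever V crosses threshold.
    Units: ms, mV, megaohms, nA (illustrative values)."""
    v = v_rest
    spike_times, trace = [], []
    for step, i in enumerate(i_ext):
        v += (-(v - v_rest) + r_m * i) * (dt / tau)  # forward Euler step
        if v >= v_thresh:
            spike_times.append(step * dt)  # record spike time (ms)
            v = v_reset                    # reset after the spike
        trace.append(v)
    return trace, spike_times

# 2 nA of constant current for 100 ms pushes the steady-state voltage
# above threshold, so the model neuron fires repeatedly.
trace, spikes = simulate_lif([2.0] * 1000)
```

A dozen lines of standard Python, with no command line in sight, which is part of what makes the state of the teaching software so frustrating.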

So why do so many pieces of software created to teach neuroscience suck so much?

Now let me say this: the teaching value of the software is often excellent. The problem is that the implementation is rough, twitchy, and out of date. So maybe my question is better phrased as: why does neuroscience teaching software suck in the context of using it today, in 2016?

Let me give a few examples. This year and last I’ve used Swimmy (Grisham et al. 2008). Students have to crack the neural circuit that makes a fish swim. It is an awesome exercise that challenges my students intellectually.

But when it starts...


You’re presented with an MS-DOS command line. 1990s memory whiplash right there. The interface consists of lots and lots of windows you have to resize manually. And the Mac version is so out of date that it doesn’t run properly any more.

I tried some software at The Mind Project, including Virtual EEG (Miller et al. 2008). Virtual EEG has a cool and interesting premise. You can create different sets of pictures (say, photos of objects and photos of people), and the program shows averages of real EEG data that was generated by people viewing those pictures. But, again, the interface is kind of clunky and twitchy. It’s written in Java, and it still runs, but I ran into some refresh issues such that screens often didn’t refresh and display properly. It all worked, but was such a chore to get to the stuff I wanted.
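The core technique Virtual EEG demonstrates (this is a sketch of the general idea of event-related potential averaging, not of Virtual EEG’s actual code) fits in a few lines:

```python
def average_erp(epochs):
    """Average time-locked EEG epochs sample-by-sample.

    `epochs` is a list of equal-length voltage traces, one per stimulus
    presentation. Averaging keeps stimulus-locked activity (the
    event-related potential) while unrelated noise tends to cancel out.
    """
    n_epochs = len(epochs)
    n_samples = len(epochs[0])
    return [sum(epoch[t] for epoch in epochs) / n_epochs
            for t in range(n_samples)]

# Two toy "epochs": the average preserves the shared response shape.
erp = average_erp([[0.0, 1.0, 0.5], [0.2, 0.9, 0.3]])
```

Which again underlines the gap: the science is clean and simple, and it’s the wrapping around it that has aged badly.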

Realizing that these efforts were done the better part of a decade ago, I went looking for mobile apps.

I only found one promising candidate, Neuronify (pictured above). This one, at first glance, seemed very promising. It runs on Android and iOS. The user interface is very clean. But it feels more like a neurophysiological sandbox for playing around than a teaching tool. You build stuff rather than solving puzzles. The commands are very limited. You can inject current into a cell, but you can’t specify by how much, for instance. I’m sure I could put it to good use, but I need to think about how to use it effectively.

The contrast between teaching software and textbooks is profound. Textbook publishers have massive teams keeping the content and presentation up to date. There are new editions every few years. Say what you will about the cost, nobody would deny the typical university textbook is a professional looking, polished document.

Compared to the effort that goes into textbooks, most teaching software feels like the equivalent of a bunch of photocopied pages, printed off an old dot matrix printer, stapled together. It’s done by a small team, done once for some fairly specific teaching purpose, and nobody invests any effort in keeping it up to date after it’s out and some small paper in an educational journal is published. So even those of us who decide to use the software have to pay the pixel tax.

I want students to struggle. But I want students to struggle with the inherent complexities of cellular neuroscience. Students don’t type in command lines to run Pokémon Go or Snapchat, and I don’t want them struggling with command lines in class. It’s the least important thing of all.

References

Grisham W, Schottler NA, Krasne FB. 2008. SWIMMY: Free software for teaching neurophysiology of neuronal circuits. Journal of Undergraduate Neuroscience Education 7(1): A1-A. https://mdcune.psych.ucla.edu/modules/swimmy/swimmy-extras/grisham-etal-junef2008.pdf

Miller BR, Troyer M, Busey T. 2008. Virtual EEG: A software-based electroencephalogram designed for undergraduate neuroscience-related courses. Journal of Undergraduate Neuroscience Education 7(1): A19-A25. http://june.funfaculty.org/index.php/june/article/viewFile/629/628

External links

Swimmy
The Mind Project (including Virtual EEG)
Neuronify
A pixel artist renounces pixel art

16 November 2016

The academic equivalent of “voter fraud”

In the recent American federal elections, some jurisdictions had relatively new legislation ostensibly aimed at preventing voter fraud. Critics argued that additional restrictions and costs made the laws more effective at blocking citizens with a legitimate right to vote than preventing people from voting illegally. And that the citizens preferentially blocked – usually poor people – were more likely to vote for the opposition party than the one that put the legislation in place.

And every study I saw indicated that voter fraud was vanishingly rare: 0.00000132% according to one study.

So huge effort is put into stopping a tiny number of potential cheats, an effort that possibly harms the honest participants in the system.

I’m wondering if the same can be said of higher education.

I was at a workshop for a teaching technology update yesterday. And when I’m at workshops like this, someone invariably launches into some sort of scenario – often quite an elaborate one – where students could use the technology to cheat.

It’s not that they necessarily have seen students cheating in the ways they describe, but they are worried that they could. So instructors go through contortions and set up elaborate safeguards and barriers and obstacles to try to catch cheats on the premise that cheating is pervasive.

The problem of cheating is not a problem created by students. The problem is created by higher education’s need to push a huge number of students through the system with as few instructors as possible. This forces instructors to use crummy evaluation techniques, like multiple choice pencil and paper exams.

As Yung Tae Kim reminds us, there is no cheating in skateboarding. You are evaluated on actual, immediate, real world performance, not pencil and paper tests. What would cheating in skateboarding even mean?


External links

The Misleading Myth of Voter Fraud in American Elections

Voter Fraud: Non-Existent Problem or Election-Threatening Epidemic?
Study Finds No Evidence of Widespread Voter Fraud
Yung Tae Kim: Skateboarding Physicist & Educator
Physics of Skateboarding

Image from here.

15 November 2016

Peer review pariah, update

One of the good things about having a long running blog is that you rediscover stuff you wrote and can update it.

Back, um, “some time ago” (six years now, crap, I’m old), I dealt with the question of whether peer reviewers are overburdened. That is, there are too many papers and not enough people willing to review them all. At the time, I was suspicious of the claims that being asked to review one to three papers a month was normal.

This question came up again on Twitter today. Since it’s been a few years, I wondered if I was still a pariah. I got the impression I was being asked to review more now...


And I am. The trendline is definitely upward. But it’s still far less than the “one to three papers a month” figure that people were claiming. I might hit the “one per month” around 2022.

I was part of the reviewer “talent pool” in the early 2000s, but got very few invites. I am the same guy now as then, so what’s changed? I think I’m getting asked to do more reviews because of the time spent in the academic system. And I was, luckily, able to step up my own publication game around 2010, which may have contributed to my “name visibility” among editorial boards.

I know some people on Twitter who are on journal editorial boards, and they do indeed complain about finding reviewers. But I wonder how well editors use the available talent pool. I would bet that journal peer review invitations are biased against:

  • Faculty who are not at American universities. (Update, 21 November 2016: Warne (2016) reports proportionately more peer review is performed by American researchers than Chinese ones.)
  • Faculty who are not at English-speaking universities.
  • Faculty at undergraduate institutions.
  • Post-doctoral fellows and graduate students.
  • Women reviewers.
  • Minority reviewers.

Update, 21 November 2016: Table 1 in Okike et al. (2016) shows more than ten men for every one woman reviewing manuscripts. Hat tip to Laura Jurgens.

Update, 5 December 2016: This tweeted list of “Top reviewers” from the journal Neuropsychopharmacology has nine men and one woman. Hat tip to Bita Moghaddam.

Update, 23 March 2017: A big new study in eLife by Helmer and colleagues (2017) supports the hypothesis that “women are underrepresented in the peer-review process.” This comes on the heels of a Nature article that also supports this hypothesis.

Update, 2 May 2017: Biochem Belle pointed out that Fox et al. (2017) showed that women were much, much less likely to be suggested by authors as reviewers. The highest year was only 25%, in 2014.

Update, 10 September 2017: Matt Hodgkinson tweets that for geology journals, women are underrepresented as suggested reviewers, and decline more often than men. But it is getting better.

References

Fox CW, Burns CS, Muncy AD, Meyer JA. 2017. Author-suggested reviewers: gender differences and influences on the peer review process at an ecology journal. Functional Ecology 31: 270–280. http://dx.doi.org/10.1111/1365-2435.12665

Helmer M, Schottdorf M, Neef A, Battaglia D. 2017. Gender bias in scholarly peer review. eLife 6: e21718. http://dx.doi.org/10.7554/eLife.21718

Lerback J, Hanson B. 2017. Journals invite too few women to referee. Nature 541(7638): 455–457. http://dx.doi.org/10.1038/541455a

Okike K, Hug KT, Kocher MS, Leopold SS. 2016. Single-blind vs double-blind peer review in the setting of author prestige. JAMA 316(12): 1315-1316. http://dx.doi.org/10.1001/jama.2016.11014

Warne V. 2016. Rewarding reviewers – sense or sensibility? A Wiley study explained. Learned Publishing 29(1): 41-50. http://dx.doi.org/10.1002/leap.1002

Related posts

Peer review pariah

14 November 2016

Who’s going to hero up?


We tell the tale of heroes to remind ourselves that we also can be great.
John Wick, Legend of the Five Rings roleplaying game


Seriously, holy shit, how do you grow up loving Superman and think bullying is OK?

How do you read X-Men your whole life and think hating your neighbors because they are different is just fine?

How do you see every Star Trek episode and be riddled with xenophobia?

How do you have those people as your heroes your whole life, and then spit on everything they try to teach us? ...

My one slim hope is that maybe someday, something Spider-Man or Optimus Prime or Luke Cage said will get through and find a spark. Maybe be a reminder that most of our great fictional heroes wouldn’t behave like this.

They save people, they help people. They fight for people.


External links

How can you love Superman and still be an intolerant bully?
Superdames comics

10 November 2016

Favourite Quora answers

The search function on Quora sucks, and so I’m collecting some of my favourite answers here so I can find them again later.

Neurobiology

Evolutionary biology

Academic writing

Academic publishing


Skepticism

Academic careers

Academia and higher education

All quiet on the social media front

Back in July, I posted this graph:


It’s an appropriate time for an update:


Related posts

Reasons to go back to blogging

Competition

Question and answer originally posted at Quora.

Q: If a university were to be created today, how could it compete with MIT, Harvard, and other top-tier schools?

New universities can’t compete with established universities.

I’ve thought about this question a lot, because the university I now work at may be the newest university in the world. It was established September 2015, so is currently about 15 months old as I write this. It has aspirations to become an emerging research university.

One of a university’s major resources is prestige. Prestige attracts students and faculty. Prestige attracts donors. Prestige attracts attention. Prestige gets you the benefit of the doubt.

You cannot create prestige. Prestige is something others bestow upon you.

People perceive universities as prestigious because they are old and have a proven track record. Prestige has very little to do with the education of students or skills of faculty. I’ve met students and faculty from all kinds of universities, and their abilities and skills are not different enough to explain the huge differences in how their institutions are perceived.

Lists of new universities don’t have a 5 or 10 year window; a university 50 years old is still considered “new” (e.g., The 7 best US universities founded in the last 50 years or The World's Best New Universities).

There is no way a university created in the 21st century can amass the prestige of a university created in the 17th century. Age probably stops mattering when you get a century out from the establishment of the institution.

That said, you can certainly do a better or worse job of establishing a research university. You do that primarily by giving faculty resources (particularly doctoral students to provide cheap labour) and keeping them as far away from undergraduate students as possible. Professors at research intensive universities might teach a class or two a year; colleagues in my university are typically teaching three courses a semester. This means research universities cannot admit lots of undergraduate students, meaning they are more selective, further feeding into the perception of prestige.

08 November 2016

Brush your teeth

(Note: I’ve been answering questions on Quora for a while, and this one has proved to have a “long tail”: it’s been slowly, but consistently, getting voted up. So I’m putting it here for archival purposes.)

Q: For hundreds of thousands of years, humans did not brush their teeth. What happens if I stop altogether? At what point does my mouth start to take care of itself? Animals don't brush their teeth; how do they pull it off?

A: You can’t expect your mouth to “take care of itself” the way ancient humans’ did. Modern humans in advanced industrial societies have different bacteria and a different diet (probably more sugars and other carbohydrates) than ancient humans.

Ancient humans tended to have good dental health. Two research papers published in 2013 both suggest that human cavities started to take off about 10,000 years ago – roughly the same time as we were moving to agriculture. They show this in part by comparing modern to ancient bacteria that live in our mouths.


Even if you could go back to the ancient diet, you are not going to be able to get the ancient bacteria to live in your mouth – they are long gone.

Just brush your teeth.

External links

For hundreds of thousands of years, humans did not brush their teeth. What happens if I stop altogether?