Building Canada’s Ocean Acidification Community

Kristina here–

When you think of carbon dioxide emissions, what comes to mind? For most people, that is probably something along the lines of fossil fuels, greenhouse gases, and global warming. But for me, I think about ocean acidification. Often referred to as “the other carbon dioxide problem”, ocean acidification, or OA for short, is a lesser-known by-product of excess carbon dioxide being released into the atmosphere. Between 25 and 30% of the carbon dioxide produced since the Industrial Revolution has been absorbed by our oceans. This buffering capacity of the ocean has actually helped reduce some impacts of global warming and greenhouse gases, but, as we’ve discovered in the last decade or two, it has come at a great cost to our oceans.

Figure 1. Schematic diagram of ocean acidification. Image credit: Kristina Barclay
Figure 2. Sustainable Development Goal 14.3 – Reduce Ocean Acidification. Image Credit: United Nations

When carbon dioxide (CO2) enters the ocean, it reacts with seawater to form excess hydrogen ions (H+) and bicarbonate ions (HCO3-). Increases in hydrogen ions are what make liquids more acidic and reduce their pH, hence the term “ocean acidification”. But the main consequence of more hydrogen ions in seawater is that they bond readily with carbonate ions (CO3^2-). Carbonate occurs naturally in seawater, and it is a crucial building block for organisms that build calcium carbonate hard parts, like clams, oysters, lobsters, corals, and even the tiny plankton that serve as the base of the ocean’s food chain. The fewer carbonate ions available in seawater, the harder it is for organisms to make their hard parts. In the past 15 years or so, there has been considerable research demonstrating the negative effects of OA on calcifying organisms. Calcified structures can take more energy for organisms to form, may grow smaller, more slowly, and/or weaker, or may even start to dissolve! Increased seawater acidity can also affect organism survival, particularly in early life stages. On the west coast of the U.S., there have already been several seasonal mass die-off events of oyster crops, most likely attributable to OA, that have caused significant and repeated financial losses to the aquaculture industry.
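For readers who like to see the chemistry written out, the two reactions described above can be summarized like this (a simplified view of the full seawater carbonate system):

```latex
% CO2 dissolves in seawater, forming carbonic acid, which dissociates:
\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}
% The excess hydrogen ions then tie up carbonate ions as bicarbonate:
\mathrm{H^+ + CO_3^{2-} \rightleftharpoons HCO_3^-}
```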

As most societies, particularly coastal communities, depend on the oceans for both food and livelihoods, monitoring and mitigating OA has become a global priority. The UN has declared the next decade (2021 – 2030) the Decade of Ocean Science for Sustainable Development. Many countries, including Canada, have committed to the Ocean Decade and its Sustainable Development Goals (SDGs). OA is directly addressed in the Ocean Decade plan under SDG 14.3 – to “minimize and address the impacts of ocean acidification, including through enhanced scientific cooperation at all levels”. To this end, the Global Ocean Acidification Observing Network (GOA-ON) has created a database where researchers can make sure their data adheres to SDG 14.3.1 methodologies and then contribute their data to this global OA database. There are also many other national and international OA groups that have been created in recent years to help create and share OA knowledge and research.

Canada and Ocean Acidification

Figure 3. Sustainable Development Goal 14 – Life Below Water. Image Credit: United Nations

Canada faces several unique challenges with respect to OA. First, we have the longest coastline of any country in the world. Second, Canada is more vulnerable to OA given our latitude and colder ocean temperatures, as carbonates are naturally more soluble in colder waters. Third, Canada borders three connected ocean basins, each with unique properties that make them vulnerable to the effects of OA. In the Pacific, OA is exacerbated by seasonal upwelling, where deep, naturally acidic ocean waters are forced to the surface by wind patterns. The Arctic is vulnerable due to rapidly increasing freshwater input from sea ice and glaciers melting under warming temperatures (freshwater is more acidic than seawater). In the Atlantic, OA is exacerbated by ocean mixing patterns and freshwater input from the Arctic. Finally, Canada’s coastal communities, of which there are many given our extensive coastline, are socioeconomically vulnerable to OA.

As a country, Canada is contributing to regional, national, and global OA research efforts through several means: independent research projects, local community action plans, and the work of our federal Fisheries and Oceans department (DFO), just to name a few. But Canada is a big country, and it can be hard to connect across such a wide geographical area. This is where our Ocean Acidification Community of Practice (OA CoP) comes into play. Funded by Canada’s Marine Environmental Observation Prediction and Response network (MEOPAR), the OA CoP is one of several MEOPAR Communities of Practice. The goal of MEOPAR CoPs is to facilitate knowledge mobilization and integration by uniting groups with shared concerns on particular topics (in our case, OA).

Figure 4. Canada’s Ocean Acidification Community of Practice Logo

Our community was initiated in 2018 and is made up of two Co-Leads from academia and government science, a coordinator (me), and an interdisciplinary Steering Committee consisting of experts from industry (aquaculture and fisheries), academia, DFO, and NGOs, at all career stages (student representative to senior-level management) and from all across the country. Our goals as Canada’s OA community are to coordinate across all sectors and disciplines to share OA expertise and data (particularly with end-users), identify pressing needs for OA research/knowledge in Canada, and foster a collaborative and supportive environment for groups affected by OA. We also act as the Canadian leads for international collaborations and OA research efforts, such as GOA-ON, the OA Alliance, and the OA Information Exchange.

Anyone who is interested in or affected by OA in Canada is welcome to join our community. We currently have over 170 members, including individuals from aquaculture, fisheries, and NGOs, academics, federal and provincial government scientists, Indigenous community leaders, graduate students, and members of other international OA organizations. Members receive our quarterly newsletters and updates on any upcoming events that might be of interest. We also encourage our members to join Team Canada and participate in the OA Info Exchange, an international forum that is a great place to discuss and share new ideas and research, and to see what experts from around the world are doing to address and learn about OA.

What do we do?

Figure 5. A graphic I made (using Canva) to advertise our new Map of Canada’s OA Resources

As the OA CoP Coordinator, my job is to keep growing our community, seek new research and community-building opportunities, facilitate our involvement in the broader global OA community, provide, maintain, and create new resources for our members, and stay updated on the latest OA research and news. Here are some of the things I’ve been working on for Canada’s OA Community.

Canada’s OA Website

One of our biggest activities has been to create a website that acts as a central hub for all of the resources we’ve gathered for Canada’s OA community. The website, oceanacidification.ca, is always growing, and we regularly add new OA resources and materials. The COVID-19 pandemic has taught us all the importance of online resources, so a large part of my focus over the past year has been to develop new online content for our community that will allow us to connect, even if we are unable to gather in person for regional workshops. The goals of these new resources are to help increase awareness and engagement with our community, and further our CoP objectives.

Our Map of Canada’s OA Resources

On the International OA Day of Action (January 8, or 8.1, the current pH of our oceans) this year, we launched an exciting new resource, an interactive map of Canada’s OA Resources, where visitors can search for OA projects, experts, and resources from across Canada, or browse the resources available in their area. We update the map regularly to make sure our community has all the latest information.

Our New Blog Series

Figure 6. Examples of our social media posts. Made by Kristina using Canva.

In December, we launched four new blog series aimed at increasing engagement and awareness, and providing new resources for our community. The first blog series, OA News (You Could Use), is a weekly snapshot of OA news and activities happening around the world. Posts contain 3 – 5 OA-related news items, including upcoming events, news stories, recent publications, and new resources. The second series is called Research Recaps, where we interview researchers, particularly early career researchers, to get an inside perspective on their recent publications. The posts are written in accessible language, allowing a wide audience to get a glimpse of how the scientific process works, and how researchers create new OA knowledge. The third blog series is called Scientist Spotlights, where we interview individuals to learn more about their research backgrounds and interests in OA. These posts allow the average person to learn more about why researchers are interested or motivated to study OA-related subjects. Our fourth series, Meet the CoP, is similar to our Scientist Spotlight series, but we interview our leadership team to learn more about why they are motivated to lead Canada’s OA community. The goal of the Meet the CoP series is to inspire and help us understand why OA research and our community matter to Canada. A lot of my inspiration in creating these four blog series came from working with Time Scavengers.

Social Media

I’ve been working to increase our online social media presence since October 2020, posting at least 3 – 4 times a week on Twitter, and 1 – 2 times a week on Facebook and Instagram. Using some of the things I’ve learned volunteering with Time Scavengers, I’ve started to try out different visual graphics to go along with our posts to see what is appealing to viewers. An interesting trend I’ve noticed so far is that while we get the most engagement (likes) on our Instagram posts, Twitter is the predominant source of our social media web traffic, and is our third most common source of web traffic overall (behind direct visits and Google searches).

Figure 7. Growth in our social media followers since October 2020. Twitter appears to be our most useful platform.

Ongoing and Future Projects

One of our biggest projects that we are hoping to start working on this summer (funding and COVID dependent) is our Critical Ocean Acidification Sensor Technologies for Coastal Industries and Communities (COAST to Coast) OA sensor package. The plan is to partner with aquaculture operators to deploy OA sensors that will not only allow us to contribute to larger OA monitoring efforts, but might also allow operators to detect and predict OA events. Another goal of the sensor package is to assess the viability of newer, lower-cost sensors, as most of the well-established OA sensors are very expensive, which is cost-prohibitive for individual aquaculture operators. We are also working on a couple of research papers, including metadata analyses of OA research in Canada and regional OA vulnerability assessments, in partnership with DFO and NOAA’s joint OA Working Groups, that will include biological, physical, and socio-economic data. I’ve also been using the metadata I gather to build a database of Canada’s OA publications, which we hope to release in the coming months.

What I’ve Learned

It has been a great experience getting to work with such an interdisciplinary group to learn more about the many disciplines involved in OA research. While a lot of my Ph.D. research involved the effects of ocean acidification on molluscs and their shells, as a palaeontologist, I typically think about OA from a deep-time, biological perspective. In this role, I’ve thrown myself into the modern world of OA, and learned about everything from government and interagency science, to policy, oceanography, chemistry, aquaculture, fisheries, social science, and more. I’ve been able to meet and listen to OA experts from around the world, including Mexico and the U.S., as well as countries in Europe, Africa, South America, and Central America. The international OA community is really welcoming and collaborative. I’ve also learned a lot about chemical oceanography and carbon cycles in the Arctic from the lab where I am a postdoc.

I’ve been able to apply and grow my skills in science communication by getting to interview and interact with so many people who all think about OA so differently. I’ve had a lot of fun interviewing researchers and writing blog pieces, as well as facilitating conversations with groups from all different sectors. It has helped me become a more well-rounded scientist and science communicator. As someone who is interested in conservation palaeobiology and the implications of the fossil record for modern conservation and climate change issues, being able to “speak the language” of a wide range of modern scientists and stakeholders is also a valuable skill when trying to identify research priorities, build collaborations, or seek funding opportunities. My experiences working with Time Scavengers have also helped me think of new and creative ways to help grow our OA Community in Canada.

If you are interested in learning more about Canada’s Ocean Acidification Community of Practice, please visit our website, and consider becoming a member.

To learn more about the science of OA and ocean chemistry, check out this Time Scavengers webpage.

Acknowledgements:
Thank you to OA CoP Co-leads, Dr. Helen Gurney-Smith and Dr. Brent Else for reviewing this blog post.

Inferring Phylogenies

Jen here – 

Are you interested in understanding how we take morphological data from extinct animals and use them to infer an evolutionary history? We often think of and visualize relationships as trees; this includes your family tree. We have an entire page on Reading the Tree of Life so you can understand how to read and interpret these visualizations. These trees, called phylogenies, can be used as a framework to test different macroevolutionary questions regarding species distribution, paleoecology, rates of change, and so much more! We hope to set the stage to explain how each step is done!

Before really diving into anything specific, I would suggest you think a little about evolution, phylogeny, and all the basic terminology that builds the foundation for understanding evolutionary theory. I would recommend that you work through The Compleat Cladist: A Primer of Phylogenetic Procedures. This is effectively a workbook that walks you through terms, concepts, and more!

This isn’t meant to be an exhaustive guide but rather to set you up to explore the programs and infer a phylogeny! Now that you have learned all you can about your study organism and how to build a character matrix, the next step is inferring a phylogeny.

What does it mean to infer a phylogeny?

Simply put, evolutionary scientists can take a data matrix and apply mathematical and statistical models to estimate, or infer, species relationships and generate a phylogeny (evolutionary history). In paleontology, the data are generated by an individual’s understanding of homologous characters in the group and are inherently shaped by that expert knowledge. Homology is similarity due to inheritance from a common ancestor. As such, the researcher is presenting a phylogenetic hypothesis for the group.

It is important to understand the purpose for pursuing any scientific approach. Why paleontologists should pursue building and inferring phylogenies is well described by Brian O’Meara in his PhyloMeth video on Why build phylogenies? In essence, tree topologies not only tell us how organisms are related to one another, but they can also be used as a framework for a variety of macroevolutionary approaches.

To get an idea of the basics of tree space, please watch this video, Be afraid of tree space, by Brian O’Meara to get you excited about trees.

What are the methods?

These are several of the primary methods currently being used in phylogenetic paleobiology. There are certainly more methods, and we encourage you to explore and learn on your own!

Maximum Parsimony

Parsimony, similar to Occam’s razor, suggests that the simplest explanation that fits the evidence is the best. Applying this logic to evolutionary trees means that the best inference or hypothesis is the one that requires the fewest evolutionary changes – or character changes across branches. 
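To make this idea concrete, here is a minimal sketch in Python of the Fitch algorithm, which counts the minimum number of state changes one character requires on a fixed tree (the four-taxon tree and character states below are invented for illustration; programs like PAUP* and TNT apply this kind of counting across all characters while searching many trees):

```python
# Minimal Fitch parsimony count for a single character on a fixed,
# fully bifurcating tree. Taxa and states are hypothetical examples.

def fitch(node, states):
    """Return (possible_states, change_count) for the subtree at node.
    A leaf is a taxon name (str); an internal node is a (left, right) tuple."""
    if isinstance(node, str):
        return {states[node]}, 0          # leaf: its observed state, no changes
    left_states, left_changes = fitch(node[0], states)
    right_states, right_changes = fitch(node[1], states)
    changes = left_changes + right_changes
    overlap = left_states & right_states
    if overlap:                            # children can agree: no change needed here
        return overlap, changes
    return left_states | right_states, changes + 1  # disagreement: one change

tree = (("A", "B"), ("C", "D"))            # the tree ((A,B),(C,D))
character = {"A": 0, "B": 1, "C": 1, "D": 1}
print(fitch(tree, character)[1])           # minimum changes for this character: 1
```

The tree whose total count across all characters is lowest is the most parsimonious hypothesis.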

Software: PAUP*, TNT


Maximum Likelihood

Likelihood methods provide probabilities of the data given a model of their evolution. The more probable the data given the tree, the more the tree is preferred overall. Because the model is chosen by the user, this method can be employed for a variety of situations. 

Models of evolution used in paleobiology include Jukes-Cantor (JC) and Felsenstein (F81), but there are many others. Here is an entire chapter on Selecting Models of Evolution by David Posada.
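As a toy illustration of the likelihood idea (the branch length below is invented, and morphological analyses often use related models rather than JC itself), the Jukes-Cantor model gives the probability that a character stays the same, or changes, along a branch:

```python
# Jukes-Cantor (JC69) transition probabilities along a branch of length d,
# where d is the expected number of substitutions per site.
import math

def jc_probability(d, same):
    decay = math.exp(-4.0 * d / 3.0)
    return 0.25 + 0.75 * decay if same else 0.25 - 0.25 * decay

# Probability that the states match across a branch of length 0.1,
# weighted by the 1/4 stationary frequency of the starting state:
print(0.25 * jc_probability(0.1, same=True))
```

Likelihood methods multiply probabilities like these across characters and branches, and prefer the tree (and branch lengths) that make the observed data most probable.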

Software: PAUP*, RAxML; the Bayesian programs listed below can be used as well.


Bayesian Estimation

Similar to Maximum Likelihood, Bayesian estimation is based on the probabilities of the data given a model of their evolution with the addition of prior beliefs.
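In symbols, this is just Bayes’ theorem applied to trees (a schematic form; real analyses also place priors on branch lengths and model parameters):

```latex
P(\text{tree} \mid \text{data}) =
  \frac{P(\text{data} \mid \text{tree}) \; P(\text{tree})}{P(\text{data})}
```

The result is a posterior distribution over trees, so support for any grouping can be read off as a probability.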

Software: RevBayes, MrBayes, BEAST


How do you select a method?

Why not try them all? Paleontology has been slow to adapt statistical models to better suit our character data, and there are many mindsets stuck on ‘this is the best way’. However, until you try each method, it is hard to say one is ‘better’ than another. Some methods may provide a route that is more closely aligned with how your clade evolved through time. Maybe one is more flexible for your dataset, maybe you get the same answer with multiple methods, or maybe you realize something new about your dataset from running multiple scenarios.


General resources and further reading: 

Subscribe to the PhyloMeth YouTube channel and watch previous lectures and discussions on different aspects of phylogenetic methods.

Paris Agreement 101

Shaina here –

On February 19, 2021, the United States officially rejoined the Paris Agreement. This is an important shift in US climate policy, so let’s go over what it means and what the Paris Agreement is!

What is the Paris Agreement?

It is an international agreement to address climate change under the auspices of the United Nations Framework Convention on Climate Change (UNFCCC). The stated goal is to keep the rise in global mean surface temperature to below 2℃ and ideally below 1.5℃. The agreement was adopted in 2015 at the 21st Conference of the Parties (COP) to the UNFCCC and agreed to by 196 countries.

What is the history of the Paris Agreement?

The formal history within the UN began in 1992 with the creation of the United Nations Framework Convention on Climate Change. The UNFCCC established the vague goal of reducing greenhouse gas emissions to prevent ‘dangerous anthropogenic interference’ (DAI) with the climate. Over the years there were many efforts under the UNFCCC to achieve this, such as the 1997 Kyoto Protocol, which called for binding emissions reductions for certain countries over a short time period. One of the main issues with trying to avoid DAI is that what counts as dangerous means different things to different people in different places. This meant that finding a goal that diplomatic representatives from all involved countries could agree on was rather challenging. A long and meandering path led to the decision to adopt the 2℃ (and hopefully 1.5℃) temperature target, and eventually to the Paris Agreement.

The US involvement in the process that led to the Paris Agreement is very complex. As the world’s largest historic greenhouse gas emitter, the US had a lot of power during negotiations. Any international action aimed at addressing climate change must involve the large emitters in order to be successful. However, large emitters became that way through reliance on fossil fuels (and, relatedly, slavery and colonialism), and thus have an interest in seeing their use as an energy source continue, despite the urgent need for production to decrease. US negotiators worked to ensure that, rather than binding emissions reductions, the agreement instead had self-defined commitments, and that it avoided requiring things like liability for loss and damage resulting from climate disasters.

How does it work?

The Paris Agreement does not require binding emissions reductions, meaning that countries are not actually required to reduce emissions by a certain amount at a certain time, nor are they required to tie their plans to address climate change to their historic emissions. Rather, countries are only bound to participate in the process outlined in the agreement. That process consists of several steps. First, countries each come up with their own individual plans, called Nationally Determined Contributions (NDCs), for how they want to address climate change. These plans can be a combination of mitigation, adaptation, finance, and technology transfer. Then every 5 years they reassess and hopefully ramp up their action plans. Ideally each iteration brings them closer to net zero emissions by mid-century (the term ‘net’ here gives a ton of wiggle room for things like market mechanisms that may or may not actually lead to emissions reductions).

How is it working out?

To be honest, rather poorly so far. It has been five years since the Paris Agreement was adopted, and during that time emissions, greenhouse gas concentrations in the atmosphere, and temperatures have continued to rise. While there was a slight decline in emissions in 2020 due to the COVID-19 pandemic (Le Quéré et al., 2020), that decline was not a result of countries taking action on climate change, but rather of the emergency lockdowns. The pledges countries have so far submitted would put us on track for around 3℃ of warming by the end of the century. The annual COP meetings are where negotiations for Paris Agreement implementation happen; however, the COP meeting that was supposed to take place at the end of 2020 was cancelled (youth held their own in its place). Countries were still required to submit updated NDCs by the end of 2020, and negotiations will continue at COP26 in November.

What does the Paris Agreement say about climate justice?

To be honest with you, dear reader, this part irritates me. There is only one mention of climate justice in the Paris Agreement and it reads: “noting the importance for some of the concept of “climate justice”, when taking action to address climate change”. Climate justice is a term used to encapsulate the many ways that a changing climate is related to sociopolitical inequality across many scales; this can include the ways climate impacts disproportionately harm marginalized populations, the ways historic emitters have had an outsized contribution to creating the problem, and much more. In my opinion, and I am sure many of you would agree, justice is one of the most fundamental, if not the most fundamental, issues at play in the climate crisis. But it is only mentioned in passing here, and as only being important “to some”. Many scholars have addressed shortcomings with the Agreement with respect to climate justice (I wrote a chapter of my own dissertation that will add to this body of knowledge). However, despite its shortcomings and lack of robust consideration of justice, the Agreement is currently the best hope we have for a coordinated international response. And we desperately need that. So this is where the general public can play a large role: we can advocate for policies in our countries and communities that will center justice as a way of bringing this concept to the forefront of the conversation.

What happens after the US rejoins?

The Biden administration will need to submit a new NDC with a renewed pledge. The pledge that was submitted under the Obama administration was considered ‘insufficient’. Then the Trump administration withdrew from the Paris Agreement (moving us into ‘critically insufficient’ territory) and worked to undermine climate action at every opportunity with numerous environmental policy rollbacks, deregulations, and anti-science rhetoric. So Biden will need to submit something truly ambitious, and much stronger than what was done under the Obama administration. It will be important that they not only make an ambitious plan but that they show immediate progress towards justice-centered emissions reductions. Their NDC will likely be based around Biden’s climate plan, which does look ambitious, and what they submit to the UNFCCC will need to be compatible with giving us the best possible chance of staying below 1.5℃ of warming in order to show that they are fully committed to justice and climate action.

Rejoining the Paris Agreement is a necessary step for the US to get back on track with the international effort to address climate change. However, we will need to watch closely over the next few months to see what the submitted NDC looks like and what concrete steps are being taken immediately to put those plans into action.

For now, let’s celebrate this win and do all we can to ensure that this is successful!

References:
Le Quéré, C., Jackson, R.B., Jones, M.W. et al. Temporary reduction in daily global CO2 emissions during the COVID-19 forced confinement. Nat. Clim. Chang. 10, 647–653 (2020). https://doi.org/10.1038/s41558-020-0797-x

The Scientific Process: What is “Peer-Review”?

Kristina here –

Today, humans have access to more information than at any other time in human history, all at the tips of our fingers with a quick Google search, or a “Hey, [insert AI name here]”. While equal access to the internet and information technology is beyond the scope of this post, cell phones, tablets, and laptops have made it easier than ever to quickly look up information. Yet with this technology has come a huge surge in widespread misinformation, making it difficult to know whether you can trust the information you find. Pretty much anyone can post whatever they want, and pass it off as “fact”. How then can the average person determine whether what they’re reading is actually credible and factual? Furthermore, if you see something that says “scientists disagree on X topic”, who should you believe? Contrary to what you might think, not all viewpoints are created equal, and both scientists and the average person can be guilty of confusing “opinion” and “fact”. This is where the “peer-review” process comes in to help.

So what is “peer-reviewing”?

Most people hearing “peer-review” assume it is a good thing (and this is certainly true), but what does “peer-review” mean? Essentially, peer-review is an integral part of the scientific process, and what helps separate “opinion” and “fact”. It is what scientists use to make sure that their research is as thorough, accurate, and factual as possible. In general, scientists do not consider something trustworthy or credible unless it has gone through some kind of peer-review process.

How does peer-review work?

A scientist or group of scientists will first go about conducting research. They will ideally do background reading to make sure they understand what is already known about the topic, and where there might be gaps in our knowledge. They will then design an experiment or test, collect data, and analyse that data. The ultimate goal of science is to try to refute a null hypothesis (e.g., all apples are red). We must prove beyond a reasonable doubt that something is different from the null, i.e., what has been previously determined (e.g., some apples are green). If we can’t prove otherwise, and the more scientists who run their own tests come to the same conclusion, the stronger our hypothesis is, or the closer it is to “the truth” (e.g., apples can be different colours).

Once scientists are finished collecting and analysing the data, and have come to a conclusion (e.g., refuted, or failed to refute a hypothesis), they will write a paper reporting their findings. See Sarah’s post on how to write a scientific paper here. The authors then submit the paper to a peer-reviewed journal, usually one that has been selected based on the topic or audience of the journal. The submitted paper is sent to an editor at that journal, who then decides if the paper is appropriate for their journal. If the paper “passes” this first test, the editor will then send it out to at least two experts in that topic.

How are the peer-reviewers selected?

Usually, journals request that authors include anywhere from 2 – 10 names of experts who know enough about the topic to provide sufficiently thorough critiques of the paper. Authors cannot include close colleagues or collaborators, as this could create bias (e.g., your friend is more likely to give you a pass, even if you don’t deserve it). Editors can opt to choose as many or as few people as they want from the authors’ list. Ideally, editors will also find at least one person not on the authors’ list who is an expert on the topic. Authors may also include a list of people they don’t want to review their papers, but they must have a good reason (e.g., “this person doesn’t agree with me” is not an acceptable reason, as critical reviews are important to ensure scientific rigor. But “this person has been openly hostile towards me” would be – some people can be jerks and block good science in peer-review). Listing too many people that you don’t want to review your paper sends up red flags to editors, however, so a “no-review” list shouldn’t be taken lightly by authors, and should only be used when absolutely necessary.

The editor then sends the paper to at least two of these reviewers. If the reviewers accept, they then have about 2 – 4 weeks to evaluate the paper. It’s important to note that editors do not usually review the papers themselves (unless they happen to be an expert in that topic), because, especially for larger journals, the editor is unlikely to know enough about the topic to give sufficiently thorough feedback (e.g., a vertebrate palaeontologist won’t review an invertebrate palaeontology paper, and vice versa). 

Peer-reviewing a scientific paper

If you are the reviewer, your job is to go through the paper and evaluate the science independently. Your comments should stick to the science and presentation of the science, and you must refrain from unnecessary criticism of the authors. For example, “this is a poorly written paragraph” is not helpful or appropriate. Instead, you should point out where you didn’t understand what was written, and why. Reviewers typically read the paper over several times to make sure they understand what the authors are trying to test, then evaluate whether the experimental design, methods, and analyses of the data were sufficient to test the hypothesis. Often, reviewers will analyse the data themselves to make sure they find the same things as the authors. Sometimes, if the reviewer feels that the methods or analyses were insufficient, they will suggest that the authors try other analyses that will more accurately test their hypothesis. This is one of the most common types of reviewer feedback. 

If the methods and analyses all hold up to scrutiny, the reviewer will then make sure that the interpretation of the data (included in a paper’s “Discussion” and “Conclusions” sections) matches the results of the analyses. Another common type of feedback from reviewers occurs when authors overstate (or sometimes understate) their conclusions (e.g., the authors may claim their paper proves x, but their results might only be applicable under very specific circumstances). A good reviewer will make sure that all of the claims made by the authors are supported by the tests they perform, and should watch for speculation (speculation may be acceptable, so long as it is clearly stated that it is such).

Reviewers then provide a thorough report back to the editor, including specific comments/suggested edits from throughout the paper. Reviewers will provide a recommendation to the editor indicating whether they think the paper is in need of revisions (“major” or “minor” revisions), or if the paper should be rejected or accepted. Major and minor revisions are the most common reviewer recommendations – major often means further analyses are needed before the hypotheses have been sufficiently tested, minor usually means that the methods and results are sound, but the authors need to tweak a few paragraphs, interpretations, or graphs throughout the paper. Papers that are considered “accepted” are exceptionally well done, and the reviewer may only have small comments that need to be addressed, or possibly none. Papers that reviewers “reject” usually have insufficient evidence to accurately test the hypotheses proposed, may have critically flawed methods or analyses, or would require very extensive revisions that would take a long time to complete, or would end up testing a different hypothesis. Rejections do not always mean that the authors should abandon the paper – it could just mean that there is more work to do before the paper can be fully evaluated. Some journals even have a “reject and resubmit” option, which means that the paper is rejected for now, but that the authors are welcome to resubmit in the future if they are able to address the reviewer’s concerns. It is sort of like “very major revisions” and gives the authors a bit more time/flexibility to complete the revisions.

Revisions

Once the editor has received reviews back from all of the reviewers, they will go through all of them to see if the reviewers have picked out common flaws in the paper, and to make sure the reviews were sufficient. If the reviewers clearly disagreed on something, the editor will often send out the paper to at least one other reviewer for another opinion (this is helpful if a reviewer was unnecessarily harsh or lax). Based on all of the reviewer evaluations, the editor will provide the final recommendation for the paper (accept, reject, reject and resubmit, major/minor revisions). The editor then sends their recommendation and summary of the reviews, along with all of the reviewer comments, back to the authors. 

The authors must then revise the paper based on the reviewer feedback, and address every single comment made by the reviewers. It is the job of the authors to not be defensive about the comments (which can be hard when someone is criticizing your work), but it is important to remember that the reviewer’s job is to make your science better. Depending on the amount of revisions requested (major or minor), the authors are usually given at least 2 weeks (and sometimes several months) to provide their revisions, as addressing every single comment thoroughly takes time. The authors then resubmit their revised paper, as well as a list of their responses to all of the reviewer comments and the actions taken to address each comment. The editor uses both documents to determine if the authors have done due diligence with the reviewer’s feedback, or if further revisions are needed.

If necessary, the editor will send the paper back to the original reviewers or to new reviewers. The process will repeat until the paper becomes acceptable to reviewers, or the paper is rejected. Once the paper is considered acceptable by the reviewers and editor, the peer-review process is complete and the paper is ready to be formatted and published in the journal! It can take anywhere from weeks to years for a paper to become accepted! 

Responsibilities of reviewers

It is important to note that neither authors nor reviewers are paid (editors at larger journals are sometimes paid positions). Instead, peer-reviewing is considered an “academic service”, and authors should expect to review 1 – 2 papers for every paper they publish (i.e., for each review of your paper, you should return the favour by reviewing that many papers). While some people have strong opinions on monetary compensation for reviewers and editors, the current justification is that reviewing is a service and the lack of compensation should keep reviewers impartial. The peer-review process is a lot of work for everyone involved, but is the best way to ensure we have a system that produces sound, thorough, and accurate science.

Peer-review doesn’t just happen in journals, either. Scientific books, text books, theses, and government reports may also be considered peer-reviewed, as they are usually thoroughly reviewed by several experts, or scientific review panels. But the most common, and the most widely accepted, form of citation or source is the peer-reviewed journal article. Peer-review also occurs for articles in other fields in academia, such as history and the arts.

So, how do you tell if what you’re reading (or what you’ve heard) is credible?

Has it been peer-reviewed?

Is the information coming from a reputable peer-reviewed journal?

Do they cite their sources when stating information/presenting facts?

Even if the information isn’t presented in a journal (e.g., a governmental report, book, or blog post), do they use citations to support their arguments? Are these sources credible (i.e., from peer-reviewed sources, not some random internet link)?

Do the majority of scientists/experts in the topic agree with this opinion?

If you come across a “fact” that a scientist has stated, remember that not all “opinions” are created equal. If the majority of experts have come to a conclusion, yet one person disagrees, that person has most likely failed to properly refute a hypothesis (their conclusions do not match the majority of the evidence). This usually happens when a scientist fails to include all of the appropriate variables in their methods, meaning that the test they used to refute the hypothesis was flawed, even if their work has been published. For example, those who claim global warming has happened before, and that therefore the global warming we are experiencing today is just natural variation, are failing to include an important variable: the rate of change (which is much faster than any past “background” variation).

Is the author an expert on the subject?

It takes several years to gain expertise in a topic, mostly by reading all of the peer-reviewed papers on that topic (hundreds or even thousands of papers), staying up-to-date on new research, conducting experiments, and going through the peer-review process. Google searches don’t cut it. Even if someone is a scientist, if they normally work on a different topic, there is a greater chance that they might be missing something that is common knowledge to experts in that field. For example, I as a palaeontologist am not about to try to write a paper on black holes, even though I think they’re fascinating and have read lots about them.

An imperfect system

The peer-review system is not infallible. Nowadays, scientists will often “publish” their work online outside or ahead of the peer-review process with things called pre-prints. Pre-prints allow scientists to share their work, especially large datasets, ahead of peer-review so that they can share their work more quickly and potentially get feedback from other researchers. Often, the data included in pre-prints will end up going through the peer-review process, but as the peer-review process can take a long time, pre-prints allow researchers to get their data out there and get feedback faster. While it may not seem as rigorous because it hasn’t gone through the peer-review process, it can actually end up being more transparent because it potentially allows more people to review the research. Essentially, pre-prints still go through “peer-review” in the actual sense of the word, just not necessarily through the traditional channels of journals.

Journal reviewers can also sometimes act inappropriately. For example, reviewers might make unhelpful comments that are not constructive or based on the science, or may even be downright abusive or derogatory – e.g., criticizing the author, not the work, or saying something unnecessarily rude. While these kinds of comments are not permissible in the peer-review process, and it is usually the responsibility of the editor to reject reviews that include inappropriate content, these kinds of things regularly slip through. It is then within the author’s right to ask the editor to step in and find an alternative reviewer, or to ignore the comments when making their final decision. These kinds of checks and balances are what help the peer-review process remain as impartial as possible – comments must be limited to the science and the presentation of material, and cannot include opinions or feelings about the work, even if its conclusions disagree with your own.

Finally, just because something gets published doesn’t mean it’s perfect. There are lots of bad papers out there that slip through the peer-review process. Editors and reviewers are people too. That is why scientists must always evaluate previous work for themselves. It is an inherent part of the scientific process – trying to independently reject that null hypothesis to see if you come to the same conclusion.

Building a Character Matrix

Jen here – 

Interested in understanding how we take morphological data from extinct animals and use them to infer an evolutionary history? The resulting trees, called phylogenies, can be used as a framework to test different macroevolutionary questions regarding species distribution, paleoecology, rates of change, and so much more! We hope to set the stage to explain how each step is done! First things first, constructing a character matrix.

Before really diving into anything specific, I would suggest you think a little about evolution, phylogeny, and all the basic terminology that goes into this field. I would recommend that you work through The Compleat Cladist: A Primer of Phylogenetic Procedures. This is effectively a workbook that walks you through terms, concepts, and more!

This isn’t meant to be an exhaustive guide but rather to set you up to explore the program and generate a test character matrix!

Step 1: Learn about your study group

This will involve a LOT of reading and diving into the history of the animals you are interested in. In some instances this is easy, in others it is very difficult! I won’t dwell on this too much, but it’s easy to forget where to begin. I would start by using Google Scholar to search for your group of interest plus terms like evolution, morphology, and phylogeny. Then you will probably have to head to the library armed with a list of literature that is much older than you to really begin your deep dive. Remember that ideas change through time, so starting at the beginning is really valuable for learning how ideas have changed!

What is important is that you also learn about homology and work to understand the homologous elements of your critters. Homology is simply similarity due to inheritance from a common ancestor. The understanding and evaluation of homology may differ depending on the group you are looking at. For example, echinoderm homology has been considered this way for a while now, and there are several schemes. One takes into account the body as a whole and how the elements are connected; the other takes a more specific approach, looking at specific plates around the mouth. These are not mutually exclusive schemes but can be used in concert with one another. Another good thing to remember is that some people like to think they are more correct than others – who’s to say, really. Just make sure you do your own homework to form your own opinions and ideas.

Step 2: Organize your information

There are several ways to do this: you could simply store information in Excel or Google Sheets, or you could use a program designed for curating character data. I have used Mesquite for this. Mesquite is freely available software that is

“…modular, extendible software for evolutionary biology, designed to help biologists organize and analyze comparative data about organisms. Its emphasis is on phylogenetic analysis, but some of its modules concern population genetics, while others do non-phylogenetic multivariate analysis. Because it is modular, the analyses available depend on the modules installed.”

You can easily describe your characters, add new taxa, remove taxa, import or draw a tree and see how characters change across different tree topologies. 

Here is the barebones starting place. I set up a new file and said I wanted three taxa and three characters. Now I can go in and start editing things!


There is a side toolbar where you can easily start to modify the matrix. You can change the taxon names, add taxa, change characters, add characters, delete whatever you want, and a lot more that I haven’t really messed around with! I suggest that if you are a first-time user, you spend some time messing around with your fake matrix. Once you get a sizable dataset in here, it’s best you don’t make any mistakes! Figure out where you may go awry and troubleshoot ahead of time.

Here is my edited matrix where I’ve added in three taxa and three characters. Notice at the bottom where it shows a character and the different states that are available. So when you edit the matrix you can use numbers or the character state – numbers are easier!


An easier way to import your characters and their different states is to use the State Names Editor window. This shows you the list of your characters and all the different states each can have – you can easily edit these, and it’s a nice way to organize the characters, since in the character matrix the text is slanted and kind of hard to read.

Character matrix with the character list in the far left column and the states spanning the rest. The states can be whatever you want – which is where bias can slip in, so don’t forget to refer back to your knowledge base and understanding of homology.


The functionality of Mesquite extends quite beyond this. If you are looking for tutorials or want to push the limits of the program, here is some further reading:

Step 3: Export your matrix for analysis

Extensive export options via Mesquite!

File > Export will give you a series of options to export your file. Don’t forget to also regularly SAVE your file so that you can revisit your matrix and easily add to it! Most programs that infer phylogenies require a NEXUS file. This type of file contains your matrix and often a bit more information about what you want in the analysis or about the characters. I would suggest exporting a few different types and opening them in your favorite plain text editor so you can see how they are structured and why certain programs may want different files and different information!
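To give a rough idea of what that looks like, here is a minimal, hypothetical NEXUS file for a three-taxon, three-character morphological matrix (the taxon names and states are invented, and the exact blocks expected vary by program):

```nexus
#NEXUS
BEGIN DATA;
  DIMENSIONS NTAX=3 NCHAR=3;
  FORMAT DATATYPE=STANDARD MISSING=? GAP=- SYMBOLS="012";
  MATRIX
    TaxonA  010
    TaxonB  110
    TaxonC  112
  ;
END;
```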


Counting Deep Sea Sediments

Adriane here–

As paleontologists and paleoceanographers, sometimes the analyses we do involve complex equations, time-consuming geochemistry, or large amounts of computational time running models. But every now and then, we gather data using a method that is simple and fast. Today, I want to talk about one such method that I use quite often in my research. These data are called biogenic counts.

In previous posts, I’ve written about the deep-sea sediments I use in my research, such as sampling the cores we drilled from the Tasman Sea, and processing these samples once they are back in the lab. Each sample, which is stored in a small vial and represents 2 cm of the core (or 10 cubic cm of material), contains pieces of hard parts of plankton and animals, as well as minerals. These minerals and biogenic pieces, then, can tell us about our oceans and the life they held millions of years ago.

Biogenic count data is just that: I dump the sediment samples onto a tray and count the number of ‘things’ that are in that sample to determine the percentage of each ‘thing’ there. ‘Things’ in the sediment fall into a couple different categories: benthic foraminifera (foraminifera that live on the bottom of the seafloor), planktic foraminifera (foraminifera that float in the upper part of the water column in the open ocean), echinoderm spines (the hard parts of things like starfish and sea urchins), foraminifera fragments (pieces of foraminifera shell that are broken), and sponge spicules (the hard parts of sponges that look like spiked glass); I also make note of any minerals that are found in the sample. In one day, I do about 10 samples, which doesn’t seem like much but adds up every day!

Below I’ll go over the exact steps I take when performing biogenic counts:

A) An image of one of my jarred samples. B) The microsplitter used to split samples. Notice that the sample being poured in is split between the two cups on either side.

First, I take the jarred sediment and split the sample using a micro-splitter. A micro-splitter is a tiny contraption that equally ‘splits’ the sediment into two holders. Because each sample contains tens, maybe even hundreds of thousands of particles, there’s no way we could count all of that! So instead, splitting the sample down to a reasonable number of particles allows us to more accurately and quickly count the number of particles in each sample, which we can then use to get a percent of each ‘thing’ (e.g., benthic foraminifera, fragment, echinoderm piece) in each sample.

Generally, I try to split the sample until about 300 particles remain in one of the cups. This can take splitting the sample anywhere from 3 to 9 times, depending on how much sediment is in each sample to begin with (each pass through the splitter halves the sample, so n splits leave about 1/2^n of the original material). Once I have the ~300 particles, I then sprinkle them evenly onto a picking tray (a metal tray with a grid on it). I then count the number of each ‘thing’ on the picking tray. I keep count of each ‘thing’ using a counter, which makes the process very fast and easy!

An image of my picking tray with the sample sprinkled on it. Some of the major components, or ‘things’, in the sediment are labeled. Most of them are planktic foraminifera, which can be very small or quite large. There are a few benthic foraminifera, several fragments, and only one piece of an echinoderm spine. Generally, planktic foraminifera are most common in these samples.

Once I have this information, I put it into a spreadsheet to plot the data. One thing I haven’t mentioned yet is why we gather these biogenic count data at all. They’re actually very useful! We can use the percentages of each ‘thing’ in the sediment to calculate the ratio of planktic to benthic foraminifera. This tells us something about dissolution, or whether the bottom waters were corrosive and dissolved the fossils, as benthic foraminifera are a bit more resistant to this corrosion than planktic foraminifera. I also calculate the planktic fragmentation index, another ratio that indicates dissolution (the more dissolved a foraminifera is, the easier it is to fragment).
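As a simple sketch of the arithmetic involved (the counts below are made up, and the exact definition of the fragmentation index varies between studies), here is how the percentages and ratios could be computed in Python:

```python
# Hypothetical counts from one ~300-particle split; categories follow
# the 'things' described above.
counts = {
    "planktic_foraminifera": 210,
    "benthic_foraminifera": 12,
    "fragments": 65,
    "echinoderm_spines": 8,
    "sponge_spicules": 5,
}

total = sum(counts.values())
percentages = {thing: 100 * n / total for thing, n in counts.items()}

# Planktic:benthic ratio, expressed as percent planktic of all whole
# foraminifera; lower values can point to more dissolution.
p = counts["planktic_foraminifera"]
b = counts["benthic_foraminifera"]
percent_planktic = 100 * p / (p + b)

# Fragmentation index: fragments relative to fragments plus whole
# planktic tests (one common formulation).
fragmentation_index = 100 * counts["fragments"] / (counts["fragments"] + p)

print(percentages)
print(round(percent_planktic, 1), round(fragmentation_index, 1))
```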

Thus, the biogenic count data is a quick but extremely useful method to determine the percent of each ‘thing’ in a sample, which can be used to infer something about the corrosive nature of bottom waters, which in turn can tell us something about ocean circulation from millions of years ago!


Curating a Personal Fossil Collection

Cam here –

Cretaceous Fossils from Mississippi (Part 1)

Fossil collecting can be a fun and rewarding experience. It helps us get a perspective on how rich and diverse the fossil record is. Some of us make personal collections of the fossils we find. Collections typically start with fossils and other rocks mixed together, with little to no record of where the specimens came from. My way of collecting fossils has changed over the years, from simply piling rocks on my bed’s headboard to buying drawers and cabinets to store the specimens, keeping a record of them in a log book, and keeping label cards with every specimen in each drawer. There are many different ways to curate your collection. At the end of the day, it is all up to you.

Fossil Collections (Part 3), (Echinodermata, Blastoidea), (Row 2)

When creating a collection or collecting fossils, you want to make sure you know exactly where each fossil came from. Location is probably more valuable than the fossil itself. You can’t always rely on your memory. What I have done is print out labels and write the information down with a black ink pen. There are about 30 labels on each sheet, so I have a good amount. I write additional information on the back, such as the date, coordinates (if available), and more recently the name of the drawer in which that specimen is stored. It is OK not to have information about your specimen. You can always fill the location section with a question mark or “Unavailable”. Just make sure you fill out the card to the best of your abilities.

Filled in label

What you store your specimens in depends on how delicate and how large the specimens are. Large to small boxes with padding are good things to have. You can find these boxes at hobby shops and arts and crafts stores. Clear jewelry and bead bags are also very useful. Using all of these boxes and bags, I keep most specimens in cabinets and drawers. I label each drawer, sometimes by location, age, phylum, or fossil content. It is all up to you. The majority of my drawers are ClearView desk organizer drawers. You can find these at Walmart in the craft section, or at craft stores.

Organizing a collection can be fun, but it can also take up space. Make sure you have room and don’t stack things too high on top of each other; I have had almost half of my collection collapse on me for doing that. Have fun with it!

Labeled ClearView drawers

Data Management

Jen here – 

I started a job as a Research Museum Collection Manager in September, and a large part of it is specimen based. I handle donations, reconcile loans, look for specimens for researchers, organize the collection, and manage other types of data. Now that my job has moved largely remote, I wanted to share some of the things my museum techs and I have been working on to keep our projects moving forward.

When we think about museums, we immediately think of the beautiful displays of mounted dinosaurs and ancient deep-sea dioramas that transport you through time. However, there are many research museums that are essentially libraries of life (thanks, Adania, for that phrasing). Similar to libraries with books, these institutions hold records of life on Earth, and they are massive. At the University of Michigan Museum of Paleontology we have over 2 million invertebrates, 100 thousand vertebrates, and 50 thousand plants. Each of those specimens is tied to other records and data!

Specimen Database

Digital databases allow for the storage of data related to the specimen including location, time period, taxonomy, rock formation, collectors, and much more! Depending on the type of database the structures are slightly different but the overall goal is the same: create an easy way to explore the specimens, see what is on loan, where they are located in the collection, and if they are on display!

Databases, like regular software, get updates over time. The database I’m working in was started ~10 years ago, and there have been a lot of updates since then, so we are working to upgrade the way the data are organized. For example, there are now fields that didn’t exist before, so we are making sure the data are appropriately entered and then fixing these fields. We are also digitizing our card catalog to verify that the specimen data in the database match the physical records. We have three card catalogs: Type specimens, Alphabetical taxonomic groups, and Numerical. I spend time scanning in these cards, and my museum techs help transcribe and verify the data against our other records.

Example of a card from the University of Michigan Museum of Paleontology invertebrate card catalog. Many are typed index cards with information on the specimen.

I have quite a few donations with new specimens that need to be put into the database. To do this, I format the dataset and upload it to the database. It seems straightforward, but it takes some time and isn’t the most fun task, so I have a stockpile of them to get through while I continue my remote work.
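The formatting step is mostly column wrangling. As a hypothetical sketch (the rows, column names, and file name below are invented, not our actual database schema), the kind of cleanup involved might look like this in Python:

```python
import pandas as pd

# A few invented rows standing in for a donor's spreadsheet.
donation = pd.DataFrame({
    "Taxon": [" Hexagonaria percarinatum", "Gyrodes abyssinus "],
    "Fm": ["Formation A", "Formation B"],
    "Locality No.": ["L-101", "L-202"],
})

# Rename columns to match the database's field names (also invented).
donation = donation.rename(columns={
    "Taxon": "taxon_name",
    "Fm": "formation",
    "Locality No.": "locality_number",
})

# Tidy obvious inconsistencies before upload.
donation["taxon_name"] = donation["taxon_name"].str.strip()
donation = donation.drop_duplicates()

# Export in the shape the database importer expects.
donation.to_csv("donation_formatted.csv", index=False)
```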

Loan Invoices

One of the tasks we had started before the COVID-19 crisis was digitizing our loan documentation. We have documentation for specimens we loan out to other institutions, for specimens we bring in to study, and for any transfers that occur. This information had not been digitized, so our first step was to scan the paperwork and transcribe key information: Who were these specimens loaned to? How many specimens were loaned? Were specimen numbers listed? Were these specimens returned?

We now have a large spreadsheet that allows us to search this information rapidly. For example, when working in the collection we sometimes find specimens with paperwork, or specimens that are out of place. Now we can search the number, see if they were on loan, and make sure we close that loan as returned. In some cases we cannot find specimens, so I have to reach out to colleagues at other institutions to see if they have a record of the loan being returned. Then it's up to us to find the specimens in the collection and get them into their proper storage places.
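A lookup like the one described above becomes a few lines of code once the spreadsheet exists. Here is a minimal Python sketch, again with hypothetical file and column names, that checks whether a specimen number appears on a loan that was never closed out.

```python
import pandas as pd

# Hypothetical loan spreadsheet with one row per loan:
# loan_id, institution, specimen_numbers (free text), returned ("yes"/"no")
loans = pd.read_csv("loan_invoices.csv")

# Given a specimen number found in the collection, check whether it
# appears on any loan, and whether that loan is still open.
hits = loans[loans["specimen_numbers"].str.contains("12345", na=False)]
open_loans = hits[hits["returned"] != "yes"]

print(open_loans[["loan_id", "institution"]])
```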

Three-Dimensional Fossils

The last big project we are working on is getting new fossils ready for our online fossil repository: UM Online Repository of Fossils (UMORF). This involves some on-site work in the collection space and lots of post-processing of the fossils. We use a camera to image a fossil from many angles (photogrammetry) and then stitch the photos together to create a three-dimensional model. If you are interested in our protocol and setup, please check out our website by clicking here. Most of this work has been done by me alone, but I am working on ways to incorporate our museum techs into parts of the process that can be done at home, such as cleaning the output model and orienting the specimen for final display on the website. Check out our most recent invertebrate addition: Hexagonaria percarinatum.

Example of a species profile on UMORF! Click here to head to the page and explore the viewer.

3D Visualization Undergraduate Internship

Hey everyone! It’s Kailey, an undergraduate student at the sunny University of South Florida.

The image shows a specimen, Gyrodes abyssinus, sitting on a mesh block, with a scan in Geomagic Wrap on the screen in the background.

I wanted to take some time and share with you guys an amazing opportunity I was given earlier this year. As any ambitious college student will tell you, internships are extremely important when it comes to choosing a career path. Not only do they grant students hands-on experience in a particular field, but also broader exposure to the working world. Good internships are hard to come by, which is why I was elated when I got the opportunity to intern at the 3D visualization lab at USF!

And yes, the lab is as cool as it sounds.

For a place where complex research happens daily, the mission of the lab is rather simple: to harness 3D scanning equipment and data processing software. These technological tools have been a wonderful addition to the arts, the humanities, and STEM, as they have not only supported but completely transformed research in these fields. This dynamic lab embodies the philosophy of open-access research and data sharing, meaning that scientists and researchers from all over the world are able to use its different collections and visit historical sites from the comfort of their homes and offices.

This image shows the FaroArm scanner extended.

My job at the lab was to scan and process specimens from the Department of Geosciences' paleontological collection. The first step in this process is to scan the object in various positions (figure 1) using the FaroArm laser scanner (figure 2). This bad boy has three different joints, letting the scanner move around any object seamlessly. The FaroArm also has a probe with a laser, which essentially takes a bunch of pictures of the object and overlays them. An important note is that these "various positions" need to be manually connected in a software package called Geomagic Wrap; therefore, every scan must match up seamlessly like a puzzle! This was probably the most difficult thing to learn, as you must not only think more spatially, but also pay close attention to the small yet distinguishable details, like contour lines and topography (figure 3). In some cases, these small details mean the most to research scientists by showing things like predation scarring and growth lines.

This image shows a close-up shot of the contour lines and topography on the 3D model.

Once the scans are connected and we have a 3D model, the file is moved into a different software package called ZBrush. This is where the fun and creative aspects come in! ZBrush allows users to fill in any holes that appear in the scan and clean up any overlapping scan data (which happens when the scans aren't matched up properly in Geomagic). Next, we paint texture onto the model using different pictures of the fossil. Then, voilà, you have a bona fide 3D model (figure 4). The model shown in figure 4 is of Gyrodes abyssinus Morton, a mollusc from the Late Cretaceous.

I completed a total of three scans and processing runs before my internship was cut short by the coronavirus pandemic. While my time at the lab was brief, I learned so much in terms of technical skills and problem solving. The most notable thing I learned, however, was just how interdisciplinary science and research are at the university level. Networking with archeologists, geologists, anthropologists, and so many more opened my eyes to the different fields contributing to the research world. The experiences I gained at the 3D visualization lab will follow me through my entire academic career.

This is an image of the final 3D model of Gyrodes abyssinus with coloration and texture.

You can visit https://www.usf.edu/arts-sciences/labs/access3d/ for information on the 3D lab and visit https://sketchfab.com/access3d/collections/kailey-mccain-collection to view the rest of my collection.

International Ocean Discovery Program Early Career Workshop

Adriane here-

Earlier this year, before the world went into lockdown, I had the opportunity to participate in an early career researcher (ECR) workshop through the International Ocean Discovery Program (IODP). The workshop focused on how to write a scientific drilling proposal with colleagues and friends.

The workshop was held at Lamont-Doherty Earth Observatory in Palisades, New York, just north of New York City. At Lamont, scientists and staff manage U.S. scientific support services for IODP, the major collaborative program that, among several other things, allows scientists to live and work at sea for two months drilling and studying sediment cores. The workshop was specifically for early career researchers, loosely defined as researchers who have earned their Ph.D. but have not yet achieved tenure (that critical milestone in a professor's career when they receive a permanent position at their college or university).

The Gary C. Comer building on Lamont’s campus, where the IODP ECR workshop was held.

This workshop, which first ran a few years back, was conceived by Time Scavengers' own Dr. Andrew Fraass and his close colleague, Dr. Chris Lowery. They, along with their colleagues, built the workshop, and it has run every 2-3 years since its inception. What is so neat about the workshop is that it is also run and organized by other ECRs, with the help of more senior scientists.

The first day of the workshop focused on introducing the attendees to aspects of IODP, including presentations on the past and future of scientific ocean drilling and on the IODP proposal writing process. We also did participant introductions, where we stood up and had 1 minute to talk about ourselves, our research, etc., using only images on one slide. Later in the day, we participants were broken out into groups by themes we identified with (for example, I chose the Biosphere group because I work with fossils and am interested in evolutionary questions). From these breakout groups, we then identified 5 places in the Pacific Ocean we would like to target for drilling. Later that night, the workshop organizers held a networking reception for us at a nearby building on campus. The networking event was incredibly cool (they fed us dinner, and it was really great food) and useful (I had the opportunity to meet and speak with other ECRs who have interests similar to mine).

My introductory slide. The upper left box contained our image, name, and affiliation; the upper right box contained a research image (I cheated and included two) and our research interests in three words or less; the bottom left box contained our research expertise and any contact information; the bottom right box contained a mediocre skill we have (again, I cheated and used this to plug this website).

On the second day of the workshop, we discussed how to obtain data for a drilling proposal. To give some insight into what goes into one: a drilling proposal is a 15+ page document in which scientists lay out their hypotheses, where they want to drill on the seafloor, preliminary data supporting the hypotheses outlined, and what we call site survey data. For site surveys, scientists take smaller ships out with an apparatus towed behind the ship. These apparatuses use sonar to map the features of the seafloor, as well as the properties of the sediment below it. The changing densities of the different sediments appear as 'reflectors', allowing an MRI-like preliminary look at the sediments the scientists want to drill into. An entire presentation was dedicated to obtaining older site survey data. We also heard presentations about the different drill ships and drilling platforms operated by IODP. The second part of the day was again spent working in groups. This time, however, we split into groups depending on which area of the Pacific Ocean we were interested in working on. I joined the group interested in drilling the southeast Pacific, off the southern coast of New Zealand. Here, we began to come up with hypotheses for our proposals and to write those down.

Example of a seismic image from a seismic site survey. The very strong, prominent lines here are called 'reflectors'. This image shows the location of a proposed drill site, named SATL-56A. From this seismic image, we can interpret that the top layers of ocean sediments are very flat. The seafloor, recognized by its more 'spotty' appearance and lack of horizontal lines, is very prominent here (its top is indicated by the green reflector line). These images are essential to include in a drilling proposal so everyone has an idea of what to expect when drilling.

The third and fourth days of the workshop included fewer presentations, with more time dedicated to letting the groups work on their proposals. One of the main outcomes of the workshop is for participants to walk away not only with an idea of how to write a drilling proposal, but also with the basic groundwork for a proposal in place, developed with a group of people who share similar interests. Ample time was therefore given for the participants to refine their hypotheses, find preliminary data about their drilling locations in online databases, and build a presentation to give to the entire workshop. On the afternoon of the fourth day, the teams presented their ideas to everyone, including more senior scientists who have submitted drilling proposals in the past and have served on panels evaluating others' drilling proposals.

All in all, this was a great workshop that really allowed folks to learn more about the IODP program, where and how to find important resources, and how to begin writing these major drilling proposals. These events are particularly important for scientists from marginalized backgrounds and first-generation scientists. For me (a first-generation scientist), making connections with others is sometimes very difficult, as I have terrible imposter syndrome (the feeling that you don't belong in a community and will be found out as an imposter) and am hyper-aware that I was raised quite differently from most of my peers. Being in such a setting with other scientists, forced to work together, is terrifying but also good, because I had the opportunity to talk to and work with people I would not normally work with. For example, I had wonderful discussions with microbiologists and with professors whose work focuses more on tectonics, two research areas I had hardly interacted with previously.