EA - Why defensive writing is bad for community epistemics by Emrik

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why defensive writing is bad for community epistemics, published by Emrik on October 8, 2022 on The Effective Altruism Forum.

I see this as a big problem that makes communication and learning really inefficient. In a culture where defensive writing is the norm, readers learn to expect that their reputation is greatly at stake if they publish anything themselves. I advocate writing primarily based on what you think will help the reader. I claim this is an act of introspection that's harder than it seems at first. I'm not trying to judge anyone here. I find this hard myself, so I hope to help others notice what I've noticed in myself.

TL;DR: A summary is hard, but: You may be doing readers a disservice by not being aware of when you're optimising your writing purely for helping yourself vs at-least-partially helping readers as well. Secondly, readers should learn to interpret charitably, lest they perpetuate an inefficient and harmful culture of communication.

Definitions

Self-centered writing is when you optimise your writing based on how it reflects on your character, instead of on your expectation of what will help your readers.

Defensive writing is a subcategory of the above, where you're optimising for making sure no one ends up with a bad impression of you.

Judgmental reading is when you optimise your reading for making inferences about the author rather than just trying to learn what you can from the content.

Naturally, these aren't mutually exclusive, and you can optimise for more than one thing at once.

Takeaways

A culture of defensive writing and judgmental reading makes communication really inefficient, and makes it especially scary for newcomers to write anything. Actually, it makes it scary for everyone.
There's a difference between trying to make your statements safe to defer to (minimising false positives) and not directly optimising for that, e.g. because you're just sharing tools that readers can evaluate for themselves (minimising false negatives). Where appropriate, writers should be upfront about which they're doing. As an example, I'm not optimising this post for being safe to defer to. I take no responsibility for whatever nonsense of mine you end up actually believing. :p

You are not being harmed when someone, according to you, uses insufficiently humble language. Downvoting them for it is tantamount to bullying someone for harmless self-expression.

What does a good epistemic community look like?

In my opinion, an informed approach to this question is multidisciplinary and should ideally draw on wisdom from e.g. social epistemology, game theory, metascience, the economics of science, graph theory, rationality, and several fields in psychology, and can be usefully supplemented with insights from distributed and parallel computing, evolutionary biology, and more. There have also been relevant discussions on LessWrong over the years. I'm telling you this because the first principle of a good epistemic community is: Community members should judge practices based on whether the judgment, when universalised, will lead to better or worse incentives in the community. And if we're not aware that there even exists a depth of research on what norms a community can encourage in order to improve its epistemic health, then we might have insufficient humility and forget to question our judgments. I'm not saying we should be stifled by uncertainty, but I am advocating that we at least think twice about how to encourage positive norms, and not rely too much on cached sensibilities. I'll summarise what I think are some of the most basic problems.
1) We judge people much too harshly for what they don't know

Remember, this isn't an academic prestige contest, and the only thing that matters is whether we have the knowledge we need in ord...
