What is ACX Grants?
I want to give grants to good research and good projects with a minimum of paperwork. Like an NIH grant or something, only a lot less money and prestige.
How is this different from Marginal Revolution's Fast Grants, Nadia Eghbal's Helium Grants, or EA Funds' grant rounds?
Not different at all. It’s total 100% plagiarism of them. I'm doing it anyway because I think it’s a good idea, and I predict there are a lot of good people with good projects in this community who haven't heard about / participated in those, but who will participate when I do it.
How much money are you giving out?
ACX Grants proper will involve $250,000 of my own money, but I’m hoping to supplement with much more of other people’s money, amount to be determined. See the sections on ACX Grants + and ACX Grants ++ below.
Why do you have $250,000 to spend on grants?
Unsolicited gifts from rich patrons, your generosity in subscribing to my Substack, and the second item here.
Also, this isn’t investment advice or anything, but apparently cryptocurrency only ever goes up.
I thought you were a believer in effective altruism, which says you have to donate your money to the most effective charity - which is probably about fighting malaria or existential risk or something. Giving it to random people on your blog isn’t very effective, is it?
Some effective altruist organizations suggest that people with large but not billionaire-level amounts of money might want to try acting as charity “angel investors”. They argue that there are enough government agencies and billionaires to fund the biggest and most obvious legible high-impact opportunities. One of the best things that normal people can do with their donations is take a chance on interesting projects too small and illegible to catch governments’/billionaires’ attention. Some of these might accomplish some circumscribed goal with their seed funding; others might use the seed funding to produce preliminary results that convince governments and billionaires to help them the rest of the way. My access to the ACX community gives me a unique opportunity to do this, so I really do feel like this is the best way to donate this money.
But I’ll also, separately, be giving 10% of my income to more standard effective charities, since I’ve pledged to do this.
What kind of projects are likely to get grants?
Projects that could make the world a better place, but might not be able to catch the attention of more traditional funders. A (very non-exhaustive) list of things I’m looking for would include projects that:
help address global poverty, global health challenges, mental health issues, animal welfare issues, or global climate change.
move forward innovative and potentially socially beneficial technologies and ideas, even if these are very speculative.
help understand and prepare for potentially disruptive future events, like pandemics or the advent of AGI.
improve the academic, governmental, and decision-making institutions that work on these other causes.
help do basic research, awareness-raising, or meta-level work that could eventually lead to one of the above.
I’ll also tentatively be allowing grants for personal or career development, but there will be a high bar in terms of proving that your career would be really great for the world and that you would have a hard time developing your career without this money. I’d be most likely to approve a grant in this category if you were from a developing country that doesn’t have a lot of traditional avenues for building career capital.
These are only suggestions! If you think you have something that could make the world a better place that doesn’t fall in these categories, apply anyway.
I’ll mostly be approaching this from an effective altruist framework, but I’m not wedded to calculating things out exactly (which is impossible anyway), and if you have a great project that doesn’t fit within “traditional” effective altruism, apply anyway.
I’ll be happy to consider your application whether or not you are a traditional academic, whether or not you have a long history of past successful projects, etc.
Because I only have $250,000 and want to make at least a couple of grants, I’m unlikely to fund projects that cost more than about $100,000. If you have a really great project that needs more money than this, you should apply anyway, and I’ll see if I can fund you through ACXG+ or ACXG++.
Can a group of people apply as a team?
Yes, of course.
What is ACX Grants + ?
I know a lot of nonprofits and rich people looking for interesting projects to fund. With your permission (ie if you check a box saying so on the application form), along with considering your project myself, I’ll forward it to any of these people who I think it’s a good match for. Some of these people have a lot of money and are really excited about this, so I would highly recommend opting in. This would also be a good option for people who need more than $250,000.
If you’re a nonprofit or rich person interested in participating in ACX Grants + , and I don’t already know about your interest, please send me an email at scott@slatestarcodex.com.
What is ACX Grants ++ ?
If you opt into this one (also just a check box on the application form) then I’ll include a description of your project, plus your contact info, in a public post on this blog. Anyone who reads about it and wants to fund it can.
What’s the process like and how long will it take?
You’ll fill in a form that should take about fifteen minutes. I will read the form and talk with smart people who seem like they might have good opinions. If you are a leading candidate, I might or might not email you asking for more information, or try to arrange a short call with you. The whole process shouldn’t take more than an hour or two of your time.
I’ll close applications in two weeks, and announce winners between two to four weeks after that.
Will this be taxed?
My current understanding is that I will have to pay a gift tax but winners will not have to pay taxes on the grant money. Please double-check this with your local jurisdiction.
Grant-making foundations have some tax advantages, but I don’t have a grant-making foundation. Some grant-making foundations are helping with this project, but I can’t legally “go through them” in a way where I both give them the money and influence their decisions. We’re still talking to tax experts, but most likely we’ll make decisions separately, then compare notes, and split the funding in some legally permissible way. None of this should matter to you except that your check might come from me or from a foundation (if you opted in to ACX Grants +). You shouldn’t have to pay taxes either way.
How do I apply?
Fill in the form here.
This is wild. Excited to see where this goes
This is truly one of my favorite developments coming out of the rationalist community/Progress Studies/EA sphere.
What's the point of all this money sloshing around the economy and crypto if it's not gonna fund moonshots?
I thought about starting a rationalist community DAO: bootstrapping the token value a la any of the tokenomics mechanics printing money in crypto, creating a community fund, and voting on projects to disburse tokens to.
My hope is that this year I do a very normal grant round to set a baseline and make sure I can do this at all, and then next year I figure out some kind of crazy innovative idea like that (though probably with less crypto).
>make sure I can do this at all
My preregistered hypothesis is that this becomes one of the most successful and important things you do (as determined by you subjectively).
remind me! 10 years
Do we have remind me bots for substack???
No, tongue in cheek. But my hypothesis is genuine.
Now I’m wondering what it would take to set one up. Seems useful
Apply for a grant to develop one
Given that substack doesn't have an API ... probably a web scraper, a server and a droplet or two? This sounds like a weekend project at the outside for an interested hacker (especially if we only care about setting it up on this substack).
Substack actually seems to have an API, they just don't really publicize it. You can get the comments for this post from https://astralcodexten.substack.com/api/v1/post/40213067/comments?token=&all_comments=true, for example, where the id comes from https://astralcodexten.substack.com/api/v1/posts/apply-for-an-acx-grant
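(For anyone who wants to try: here's a minimal sketch of a poller in Python. It assumes the endpoint above returns JSON with a list of comments whose text lives in a "body" field; that shape is a guess from typical Substack payloads, so inspect the real response first.)

```python
# Minimal remind-me-bot poller for Substack's undocumented comments API
# (URL taken from the comment above). The JSON field names ("comments",
# "body", "name") are assumptions; check the live payload before use.
import re
import requests

POST_ID = 40213067  # numeric id from /api/v1/posts/apply-for-an-acx-grant
URL = (f"https://astralcodexten.substack.com/api/v1/post/{POST_ID}"
       "/comments?token=&all_comments=true")

def find_reminder_requests():
    data = requests.get(URL, timeout=30).json()
    pattern = re.compile(r"remind me!?\s*(\d+)\s*(day|week|month|year)s?", re.I)
    hits = []
    for comment in data.get("comments", []):
        match = pattern.search(comment.get("body") or "")
        if match:
            hits.append((comment.get("name"), match.group(0)))
    return hits

if __name__ == "__main__":
    for author, request_text in find_reminder_requests():
        print(f"{author}: {request_text}")
```

From there you'd still need to persist the requests and schedule the follow-up notifications (email, or a reply if you can authenticate), which is where the real work is.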
Let me know if you get anywhere on the rationalist DAO idea.
One thing I've been mulling over recently is launching a dog coin which is actually a Trojan horse for distributing funds to EA causes. The amount of money that can accrue to a (successful) dog coin is staggering. The trick is just to figure out how to make it more of a Schelling point than all the other trash floating around in that category. It would probably help that we could get Vitalik and a few other crypto "celebrities" on board - I think a lot of people would be glad to see the stupidity of the current market put to good use for once.
Google "rainbow rolls NFT". Similar idea.
Greg Cochran has suggested that if you replace nitrogen with helium in the air you breathe you might increase cognition and alertness. If this is tested and proved true there would be enormous social benefits as we could put scientists (and AGI safety researchers!) in sealed rooms with such air. I've been trying to get someone to test this idea. As I'm an economist, no sane person would let me mess with the air people breathe. But if anyone reading this has the qualifications to run the experiment perhaps ACX grants would be the right place to apply for funding.
I can't deny this is my kind of experiment, but I feel like it would be more kabbalistically appropriate to apply for a Helium Grant https://www.heliumgrant.org/
I believe Helium Grants went on indefinite hiatus?
From the website:
Q: Are you ever going to do Helium Grants again?
A: I don't have any plans to start up again, but you never know.
Nadia has ceased to make Helium grants. She's working on a new book, I think.
It's a nice idea, but my application for a MacArthur Grant to invade the Philippines has been repeatedly declined.
Haha!
This strikes me as remarkably difficult and unproven relative to old-fashioned stimulants (Ritalin/Adderall/modafinil). But I'd love to see it studied anyways!
I’m just imagining these really smart people talking about world changing ideas with high squeaky cartoon voices.
Greg has suggested that if it works speaking with a high squeaky cartoon voice would become a sign of power and authority.
You didn't mention why it's worth looking at: nitrogen narcosis (https://en.wikipedia.org/wiki/Nitrogen_narcosis). Basically, divers that dive with nitrogen find that at higher partial pressures the nitrogen is an intoxicant. Every gas except for helium & neon is an intoxicant to some effect: Xenon at low pressures can knock people out. So if 2 atm of nitrogen pressure makes you tipsy, then what does the 1 atm we're all experiencing all the time do? And if we got rid of it by replacing it with helium, then could we make people knurd?
Heliox wouldn't be that expensive to test out. Professional divers use it, and there's medical uses as well. You can either find some diver's tanks and hook people up, or get some medical grade mixers to mix oxygen & helium from tanks and run a nasal cannula.
The more interesting question is what you'd want to test. Let's suppose we can get, I dunno, 30 people to wear nasal cannulas for 2 hours. I'd do something like this: set up the mixers appropriately so that oxygen is constant @ 21%, but you can swap the remainder smoothly between helium & nitrogen. You set things up so that people are on normal air for 1 hour and heliox for the other, with which comes first being random.
As for the tests, maybe some combination of standard Raven's matrix questions and reaction time tests?
Can't seem to find heliox pricing anywhere, but I suspect you could do this for <$100K.
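(To make the crossover design above concrete, here's a rough sketch of the randomization and paired analysis in Python. Every number in it, especially the assumed two-point heliox effect, is a placeholder, not a claim.)

```python
# Sketch of the proposed within-subject crossover: 30 subjects, one hour
# on air and one on heliox each, gas order randomized. All effect sizes
# and noise levels below are made-up placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30
heliox_first = rng.permutation([True] * (n // 2) + [False] * (n // 2))
# (keep the order around so you can check for practice/order effects)

ability = rng.normal(100, 10, n)          # stable per-subject baseline
air_score = ability + rng.normal(0, 5, n)
heliox_score = ability + 2.0 + rng.normal(0, 5, n)  # assumed +2 boost

# Each subject is their own control, so use a paired test.
t_stat, p_value = stats.ttest_rel(heliox_score, air_score)
diff = np.mean(heliox_score - air_score)
print(f"mean paired difference = {diff:.2f} points, p = {p_value:.3f}")
```

The within-subject design is what makes 30 people plausibly enough: the paired test cancels out each person's baseline ability, so only the (noisy) gas effect remains.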
I think you could self experiment for <$100 and notice it if there was any big effect worth noticing. I found a study showing a massive improvement in cognitive function of divers under ~3.6atm of pressure using heliox vs compressed air. (https://europepmc.org/article/med/32176950) but no study at 1atm.
> So if 2 atm of nitrogen pressure makes you tipsy, then what does the 1 atm we're all experiencing all the time do?
It doesn't; most people need 4 atm (or 0.7 -> 2.8 atm partial pressure) to experience it.
I'll gladly self-experiment with a 79-21 mix of helium and oxygen, but I don't need the money. I'm surprised if divers and astronauts don't already do this, since helium is so much lighter than nitrogen. It's $7.57 per cubic meter, and we breathe 11 cubic meters per day, so even if the helium was not recycled at all, the cost would be only $7.57*11*0.79 = $66/day
Try not to kill yourself, which is quite possible if you're playing with pure helium tanks. Get an oxygen tank too, along with a medical-grade gas mixer. Remember that the asphyxiation response is caused not by lack of oxygen but by the presence of carbon dioxide. If you're breathing a 5% O2 95% He mix, you'll happily pass out and die without ever feeling short of breath.
People on Everest seem to notice the lack of oxygen.
Now *this* is the kind of Mad Scientist malarkey I'm here for!
Pros: come up with genius insight into pressing problem
Cons: people too busy laughing at your Donald Duck voice on helium to take you seriously
On the Internet, no-one knows you're a duck.
I think you are a really good person and I think you are extremely charitable for doing this.
But is this a better use of the money than standard effective charity donations? $250,000/~$3200 = ~78 lives not saved. If it is a better use of the money, then wouldn't it make sense to put your 10% into the research grant? After $250,000 does the research have diminishing returns such that it is better to give to the standard effective charities? It might be worth fleshing this out some more if lives are on the line.
Also, it seems like you should sell more NFTs for sure. Why not? It produces carbon which you could offset. But...I wouldn't offset the carbon, I would just save more lives.
I think spending a tiny, tiny fraction of EA money on moonshots that have a nonzero probability of dramatically increasing QALYs, or actually saving lives, makes this a really good, in fact underrated, idea.
That's a generic/a priori argument, and I understand the demand for providing those probabilities rigorously when the opportunity cost is lives saved. But that's the thing: venture capitalists, philanthropies, smart people and capitalism generally have all been attempting to predict the impact of moonshot ventures for a long time and it's *really, really hard* to predict what will work and what won't, so at some point one has to make a judgment call and say, time for some moonshots.
Good question.
One answer could be that since rationally I should fill up the most expected-effective category before moving on to anything else, and we're talking about categories much too big for any individual to ever fill up, this proves that I'm not donating fully rationally and instead trying to satisfy psychological needs (spending some money to feel risk-seeking and innovative, then other money to feel certain that I'm doing at least some good). I can't argue with this, but I think there are compelling arguments for either being the higher-utility one, and although they're probably not *exactly* equally compelling, they're close enough that the value of satisfying my psychological needs is higher (to me) than the expected value gain of doing whichever one turns out to be better.
Another answer is that since I'm not especially rich but I am especially a public figure, the value of almost everything I do is as a role model. I'm role-modeling pledging 10% of my money to effective charities (which I think more people should do), and I'm also exploiting my public-figure status to get proposals for exciting new ideas that richer people than I am can fund. These don't trade off against each other the same way that money does.
(maybe you could counterargue that I should role model being rational and not putting my psychological needs above relatively-small-but-absolutely-large gains in expected utility, but I'm not sure people would listen to that message. I wouldn't!)
To sell NFTs, I would actively have to ask for them or advertise them, and I think the reputational damage this would do is more costly than the money I would get. But if anyone wants to unsolicitedly buy an NFT from me, sure, whatever, send me an email as long as you're willing to pay enough for it to be worth my time (I am technically unsophisticated, you would have to walk me through the process, and it would take a while).
Yet another answer is that since you are already (separately from this, if I understand correctly) donating 10% to charity, this doesn't actually funge against saving lives but against your metaphorical beer fund, and therefore you are completely off the hook for that particular question as per one of your old SSC posts (this one, I think: https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/).
"maybe you could counterargue that I should role model being rational and not putting my psychological needs above relatively-small-but-absolutely-large gains in expected utility, but I'm not sure people would listen to that message. I wouldn't!"
You should listen to that message tho cuz it is the correct one in this situation.
Your argument isn't very convincing. I think the problem comes from conflating "rational" -> maximizes utility, and "rational" -> uncompromised by human emotions. Both senses of rational are ideals rather than realities, but even in the mixed and muddied real world the two senses output very different results. It seems like Scott is trying to be rational in the sense of maximizing his utility by meeting his emotional needs, while you are trying to be rational by not letting emotional needs get in the way of clear numerical calculations. Anecdotally, I cannot recommend utilitarians try to discount/ignore their emotional needs. It works no better than trying to discount/ignore physical needs.
(Of course if saving as many lives as you can is your big emotional need, then go right ahead!)
Honestly, I think this idea is possibly the maximal EA for Scott, regardless of whether it also happens to serve his emotional needs.
Ord's thesis in The Precipice is, after all, that if you value humanity's future at some non-hugely-discounted rate then X-risk swamps everything. The chance of something coming out of here that substantially ameliorates an X-risk is low, but given that X-risk charities are mostly doing similar things to this anyway and ACX has more penetration than most of them it's probably the biggest splash in that pool Scott can make.
> rationally I should fill up the most expected-effective category before moving on to anything else
This doesn't sound right to me.
It's not true, any more than a rational investor seeking to maximise his returns should throw all his money into whatever he thinks will have the highest return. That's a silly way to invest; instead a rational investor should acknowledge his own ignorance about which investments will pay off, and have a portfolio of different investments in different areas with different levels of risk.
(Project idea: apply Modern Portfolio Theory to Effective Altruism projects.)
Also, consider comparative advantage. Any semi-rich jerk can give $250K to random charity, but Scott is uniquely well placed to run this kind of project, because he has a lot of smart readers and all that.
If someone else (e.g. me) tried to run the same sort of grant project it would go much worse, because I'd probably have to resort to advertising on lamp posts in my local area or something and I'd attract a much lower quality of applicants.
I think there is a big difference between investing and charitable giving that makes this analogy not as applicable.
When doing personal investing, you are trying to maximize your expected utility, but the way you operationalize this is by trying to make money. If your goal was solely to maximize your expected amount of money, you really would choose the one stock with the highest expected return and ride it until some other stock began to look more promising. But since your goal is actually to maximize utility, and since utility is a sub-linear (perhaps logarithmic) function of money, you act more safe and diversify.
When charitable giving (as an effective altruist at least), you are trying to maximize your expected positive impact on society, and the way you operationalize this is by finding the interventions with the highest expected positive impact. There's no disconnect here like there is between wealth and utility in investing. And the way to maximize your expected positive impact on society is to find the most promising cause, and donate all your money to it (that is, unless you donate enough money for there to start being diminishing returns, which is not a problem for most smaller-scale donators).
I say all this as someone who does diversify his charitable giving somewhat, but I'm not sure if I'm doing the right thing.
I think you're confusing "rational" with "return maximizing". The point of diversification is hedging against losses. A return-maximizing investor should sink all their money in the offer with the highest return, possibly lose it all and say "Shrug, it was the correct strategy anyway."
If you apply the reasoning of "I should fill up the most expected-effective category before moving on to anything else" to an investment portfolio, you would end up with a diversified portfolio. Your first $1 goes into, say, a stock index fund. The expected utility of an additional dollar into stocks isn't as high as the expected utility of a dollar into bonds, because adding marginal risk reduction increases expected utility more than adding marginal expected return. So your next $1 goes into bonds.
It's debatable when and to what extent charitable projects experience diminishing marginal utility. But to the extent that they don't, you should put all of your dollars into whatever category has the highest expected utility.
The Kelly Criterion says that your bankroll growth rate is maximized when your bet sizing maximizes E(log(bankroll)). If you go all in on one investment, there is a chance your bankroll goes to zero, and the log of zero is negative infinity, so E(log(bankroll)) is not maximized.
In charitable giving, you just try to maximize E(utility) instead of E(log(utility)). There are enough other people doing charity work that THEY provide the diversification. So if an omnipotent being offers me a coin flip, wherein if I lost the flip I get nothing, and if I won the flip I'd get 10^69420 QALYs for each dollar I wagered, I am definitely going all in.
Forgive my mathematical illiteracy, but what difference does the log operator make here?
A logarithmic utility function makes you more risk-averse. If there's a coin flip that doubles/halves your bankroll, the linear utility maximizer would always do it because E(bankroll) is 1.25*pre_bankroll. But the logarithmic utility maximizer would be indifferent.
Logarithmic utility is appropriate whenever your future income expectation is directly proportional to your future bankroll, which is a halfway decent approximation for both investing and professional gambling. But it ignores living expenses (which should make you more risk-averse) and it ignores diminishing returns where larger investments result in smaller rates of return (which should make you more risk-tolerant, and which are pervasive in both investing and professional gambling). Most professional gamblers bet half-Kelly or less, but I'm on the more risk-tolerant end of the spectrum.
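(The double-or-half flip is easy to check numerically; a small sketch, treating "going in" as staking a fraction f of your bankroll:)

```python
# The double-or-half coin flip discussed above: expected bankroll is
# 1.25x (linear utility says always take it), expected log-bankroll is
# exactly 0 (log utility is indifferent).
import math

p_win = 0.5
expected_bankroll = p_win * 2.0 + (1 - p_win) * 0.5                 # 1.25
expected_log = p_win * math.log(2.0) + (1 - p_win) * math.log(0.5)  # 0.0
print(expected_bankroll, expected_log)

# Kelly sizing: stake a fraction f instead of everything. Winning gives
# bankroll 1+f, losing gives 1-f/2, so maximize
#   E[log] = 0.5*log(1+f) + 0.5*log(1-f/2).
# A quick grid search finds the optimum (analytically it's f = 0.5).
best_f = max((i / 1000 for i in range(1001)),
             key=lambda f: 0.5 * math.log(1 + f) + 0.5 * math.log(1 - f / 2))
print(f"Kelly-optimal stake: {best_f:.2f} of bankroll")
```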
An investor is trying to maximize his own risk-return tradeoff, thus his personal investments need to be diversified. But an effective altruist isn't trying to maximize the risk-return tradeoff for the projects he funds. Since the benefit accrues to the world, the proper portfolio is all of charitable giving, or something like that. Since the amount of money Scott is giving is small, relatively speaking, the "rational" thing to do is almost always to put it all in one place, to get the world portfolio closer to optimal.
This argument gets weaker if you're talking about Bill Gates-level donations, where you might actually fill up a bucket to its optimal position.
Diversification is called the only free lunch in investing.
Progress probably trumps direct charity in the long run... although I'm sure EA has a good response to this
(1) As Roko pointed out on Twitter (https://twitter.com/RokoMijic/status/1459121955819425796), as regards carbon emissions, it isn't very consistent to own Bitcoin/Ethereum or any other Proof of Work (PoW) cryptocurrency, while anguishing about NFTs in particular. (Assuming you do, I don't know). Especially since these NFTs would presumably reside on Ethereum, which is moving towards non-polluting Proof of Stake next year, so if anything driving up activity on it increases the chances that it flips Bitcoin sooner (this will be good from a climate perspective since $BTC has no plans to move away from PoW).
(2) Or you could, even today, sell your NFTs on a non-PoW chain such as Solana, which is probably the chain that has the best chances of flipping $ETH in turn. NFT market on Solana: https://solanart.io/
(3) You could also buy $KLIMA (https://www.klimadao.finance/) which buys up carbon credits, in amounts commensurate to whatever you think the carbon impact of any particular NFT you mint and sell is.
(4) Last but not least, customary reminder that moderate climate change towards a warmer world is almost certainly a net good, but I suppose that's a bit OT here. In any case, the US military alone emits far more carbon than all cryptocurrency combined, and is also probably a net negative for global welfare, whereas crypto is a massive net positive. From this perspective, wouldn't it be more moral (if admittedly also more dangerous) to try to maximize tax evasion?
> Last but not least, customary reminder that moderate climate change towards a warmer world is almost certainly a net good
I tried to track down this claim, and I found: an article by Matt Ridley, which cites a paper by Richard Tol; multiple corrections to the Tol paper due to data entry errors, which Tol says do not change the conclusion; and two posts by Andrew Gelman claiming that the paper and correction have additional errors that throw this all into doubt.
Is that indeed the source for your claim? If so, the information sounds woefully inadequate (like the question is under-researched, or has been summarily ignored due to apparent bogosity, or both).
It hardly needs to be said that this is a fringe belief, and unsurprising that Matt Ridley and Richard Tol are both accused of being climate science deniers (etc.), but whether that is a cause or an effect is unclear to me.
(Matt Ridley is a journalist, libertarian, and viscount. Richard Tol is an economics professor. Andrew Gelman is a statistics and polisci professor.)
I commented in greater depth on the old AGW specific thread. But TLDR: Many of the risks are overstated. Coastal megapolises are sinking 10x+ faster due to groundwater depletion than sea level rise. Warmer world = wetter world (cold produces drought, which is the real killer of civilizations historically), with a greater fertilization effect.
No, it's not based on any of those people, but paleoclimate evidence (as in, things that actually happened, as opposed to speculative models trying to incorporate many kinds of phenomena that are very poorly understood even in isolation, let alone as part of a complex system), e.g. Sahara being a verdant garden hosting elephants and hippos when the world was 2C warmer. Deep Future by Curt Stager is a good introduction.
My biggest problem with entirely rational behavior is that it seems so, well… joyless.
Mr Spock was only fun because we could laugh at him. Besides, that hot head Jim Kirk usually had a better idea. [mostly tongue in cheek]
So the point is, I guess that I’m certainly not going to find fault with an attempt to do good with an approach that might possibly be construed as non optimal.
https://www.lesswrong.com/tag/hollywood-rationality
So you guys practice a joyful rationality? I’m honestly a bit confused
It seems pretty Spock like at times.
I mean if you get down to assigning a numeric value to each action, isn’t that kind of like Spock?
Okay. I’ll read the LW links
I think I can compound my wealth at 15% a year, while the cost of utilons probably only goes up as fast as 3% expected inflation + 2% global real GDP per capita growth*. So for each year I delay, I can buy 10% more utilons. I'm 36 and have nearly US$5M, so if I delay till I'm 86 that's 1.1^50 = 117x more utilons. One might counterargue that a utilon provided today can compound itself and provide more utilons in the future, but that's less clear than financial compounding and it's going to heavily depend on the type of charity. Also, knowledge about how to spend the money will be better in the future. The plan is to wait until I'm at least 80, then give away 10% of my net worth per year.
* (source: World Bank, https://data.worldbank.org/indicator/NY.GDP.PCAP.CD
From 2010 to 2020, global GDP per capita in current US$ increased from 9558 to only 10909. That's an annualized growth of 1.3%/year. If we blame covid and cherrypick 2019 instead it's still only 1.97% from 2010-2019. Seems like there's a decent chance it will go negative in the future due to the lack of a demographic transition in some poor countries, but I'm not confident enough about it to factor that in to the model. There should be prediction markets for global GDP per capita in 2100 conditional on adopting policy X.)
(Have the EA people done empirical research on the utilon inflation rate and the utilon-compounding rate for various kinds of charity?)
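(A sketch of the wait-to-give arithmetic above, with the commenter's own assumed rates; note the 15% return is that commenter's assumption, not a fact:)

```python
# Give-later arithmetic from the comment above: money compounds at
# r_money while the price of a utilon inflates at r_utilon, so waiting
# T years multiplies the utilons you can buy by
#   ((1 + r_money) / (1 + r_utilon)) ** T.
r_money = 0.15    # commenter's assumed portfolio return
r_utilon = 0.05   # 3% inflation + 2% real GDP-per-capita growth
T = 50            # age 36 to 86

multiplier = ((1 + r_money) / (1 + r_utilon)) ** T
print(f"waiting {T} years buys ~{multiplier:.0f}x more utilons")
# ~94x, a bit below the 1.1^50 = 117x shorthand, because the true
# relative edge is 1.15/1.05 = 1.095 per year, not a flat 1.10.
```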
I think Scott's old post is tangentially relevant: https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/
I'd suggest incorporating a grant making organization to get around the taxes.
Probably I should do this before next time, but I understand it's pretty hard and might cost more than I save.
You just incorporate and then register with the IRS. It costs a few hundred dollars usually. You would be able to deduct what you give out from your taxes and to get out of gift tax in many cases. Presuming you don't hit the percentage-of-income limit and the average grant is $25k, you'd be saving about $90,000.
So I am not a tax expert and this is not tax advice and all that, but my understanding is that you won't have to pay gift taxes anyway, unless you end up making *way* more gifts than we're currently talking about - google "lifetime gift limit" and the first result's precis says "Most taxpayers won't ever pay gift tax because the IRS allows you to gift up to $11.7 million over your lifetime without having to pay gift tax."
That's what I thought, too. There's also a $15,000 per year exemption before the lifetime exemption kicks in, which I think is per recipient as well. So you should be able to give your $250k grant pool as 17+ grants of $15k or less without cutting into your lifetime exemption.
>There's also a $15,000 per year exemption before the lifetime exemption kicks in
That's right. Here's what the IRS says:
>How many annual exclusions are available?
>The annual exclusion applies to gifts to each donee. In other words, if you give each of your children $11,000 in 2002-2005, $12,000 in 2006-2008, $13,000 in 2009-2012 and $14,000 on or after January 1, 2013, the annual exclusion applies to each gift. The annual exclusion for 2014, 2015, 2016 and 2017 is $14,000. For 2018, 2019, 2020 and 2021, the annual exclusion is $15,000.
https://www.irs.gov/businesses/small-businesses-self-employed/frequently-asked-questions-on-gift-taxes
I *think* the relevant form was the 709, and that other than that you just need to sign some letter affirming that it is truly a gift, i.e. you don't expect anything in return? (this latter might have been to give to the recipient, so that they can prove on *their* taxes that this was a gift and not income). But again this is not my field so double check.
Taxes on this sort of thing are not inherently bad.
What's inherent badness got to do with this? It's just a fund allocation optimization thing.
The government set up rules specifically to give things like this a tax exemption. I'm not sure why paying taxes the government doesn't want you to is a moral issue.
A donor-advised fund is less onerous: https://www.nptrust.org/what-is-a-donor-advised-fund/
I don't think a DAF would work here - aren't they only allowed to make grants to officially approved charities? whereas Scott wants to make grants to random individuals.
Right, these are not charitable donations, in the IRS sense.
This is a fantastic idea. I think you're uniquely suited to do this well given your network and general ability to cut through the BS that larger grant organizations seem paralyzed by.
Is there a deadline for applications?
The FAQ above says applications close in two weeks.
Might be good to put a date on that, just for clarity. Especially for people who don't see this post on its first day.
Scott, this is wonderful. Apart from the money, I think there's huge value in the credibility that would be conferred on a winning project, not to speak of the potential network benefits of ACX+.
It's hard to convince the rare talent needed for very ambitious projects to take the plunge, and money is often not the bottleneck. I think the credibility and publicity of something like this would really help a project like that. Would you consider proposals which don't necessarily need the money, but are looking to form strong teams and benefit more from the network and credibility effects?
Thanks for the kind words. I don't have any plans to create a network; maybe I should figure that out but I can't guarantee it. Does anyone know how this is usually done?
I suppose if you wanted to game the system you could apply for a very small grant to improve some aspect of your operations, and check the box to be included in ACX++, and then I would have to advertise you!
The example I had in my mind was to do with startup funding. Often, even if one can personally fund an endeavor, it is still ill-advised. Not only does raising money from a reputable source impart credibility to a venture (you have to convince them!), but it also creates new stakeholders invested in its success. If these stakeholders are well connected and themselves credible and respected, the effect compounds, opening new doors for recruitment, expansion, and hopefully success.
I took your description of ACX+ to be that of a credible source of funds that would be more (or just as) valuable for its credibility-by-association and reputability than for the funds itself, especially for large ambitious projects that need a lot of funding.
I would assume advertising to the broader ACX community (through ACX++) would have a similar effect, although having observed (and taken part in!) this community's love of argument and debate, I would hate to have a project's thesis be torn to shreds in public. It could have quite the opposite effect than desired.
I want to signal boost this - particularly for start ups or individuals/small groups operating outside of accelerators/universities, funders play a much bigger role than just money - they can provide connections, sounding boards, boring-but-essential admin tips, etc.
The network already exists; I'm typing stuff into part of it right now. If the ACX-funded subnetwork wants to start their own [whatever is recommended as an alternative to a Discord server] then they can. And I'd not expect this endeavour to gain an institutional nature overnight.
Well done, a very worthy endeavor.
"Some effective altruist organizations suggest that people with large but not billionaire-level amounts of money might want to try acting as charity “angel investors”. They argue that there are enough government agencies and billionaires to fund the biggest and most obvious legible high-impact opportunities."
I have often felt the urge to write up my belief that marginal utility declines faster for charitable giving than for other types of expenditures, but then I feel tired and lie down.
Can I bribe you to write it up briefly? $50 to your paypal or whatever, or I'll sub to your substack for at least a year at the annual rate.
But is it a worthy endeavor from a Marxist perspective?
Would you say that this is still true for the particular top charities recommended by (say) GiveWell, who already try to account for decreasing marginal utility in their evaluations via their "room for more funding" consideration?
I think it's a fair question because "normal EAs" (like me, honestly) mostly just default to directing monthly donations to whatever orgs like GW recommend (maximally lazily, I just set up a recurring auto-payment to their Maximum Impact Fund), instead of trying to pick charities ourselves (which I agree would, on average, be affected by decreasing marginal utility).
Can multiple people apply as a team for the grant? I think I have a few grant-worthy ideas but no time to work on them. My main project (in vitro gametogenesis) is already well-funded.
You're doing in vitro gametogenesis? I...know a lot of people who would be willing to throw basically unlimited money at speeding that up a small amount for...uh...reasons...so let me know if there's anything do-able in this space.
But yes, of course you can apply as a team.
Interesting, I'll email you to connect about the IVG stuff.
Re this thread from before:
https://astralcodexten.substack.com/p/model-city-monday-11821/comment/3555263
It’s not much, but if you submit a grant proposal that Scott accepts, and it has to do with anything related to what I brought up in the linked comment chain, or any research that could either validate or falsify practical predictions of various Georgist policies, I’ll contribute $1,000 of my book review winnings to your grant.
As well as my attention, personal network of wonks, and anything else I can do for you.
Set up a MAOI antidepressant manufacturing startup? :)
I haven't done a thorough cost estimation, but I think that using this treatment (https://www.nature.com/articles/s41467-019-10366-y) against certain transposons (https://www.lesswrong.com/posts/ui6mDLdqXkaXiDMJ5/core-pathways-of-aging) has a good shot at _reversing_ aging.
Where, if at all, would this treatment be within this list of most promising anti-aging strategies? https://www.lesswrong.com/posts/RcifQCKkRc9XTjxC2/anti-aging-state-of-the-art#Part_V__Most_promising_anti_aging_strategies_
Assuming it works, it would take first place, as it would address the underlying cause of aging, as opposed to its symptoms.
The LASER ART acronym really makes me want to punch the person responsible.
Awesome news. So much of the basic work on third-rail issues with great potential to advance our understanding of the human condition is impossible in today's ideological climate.
You love to see this kind of stuff. Excited to see where this goes!
This is fantastic.
I've seen this sort of thing done at a lower level - people trying to do things who need money for it, and people gathering to review and fund things. I love this sort of model.
Good luck!
Surprised you have money to burn, given that you had to quit your day job due to the mob.
There was never any mob and Siskind was just complaining that the New York Times had the temerity to print his name in an article.
Hey, marxbro, what's *your* real name? I mean, if we're talking about having the temerity to print one's name where anyone can read it, then you must mean there are no bad consequences, so why aren't you all letting us know who you really are?
My real name is John Smith. People attempt to claim that NYT published Scott's name without his permission or something like that, but Scott had already put his real name out there himself. He's just angry that the New York Times is using its freedom of speech to post something minimally critical of him.
Punchline: Scott is at present one of the most individually successful working journalists in the world, surely top 100. (And more power to him!)
Substack is doing a *very* effective job monetizing the niche of high-profile individual writers, and per the current categorization Scott's the #1 writer in Science. See his comments here, and there are plenty of other analyses around the web that go a long way towards explaining why Substack can afford to headhunt so much high-profile talent: https://astralcodexten.substack.com/p/adding-my-data-point-to-the-discussion
"Guy whose main source of income is blogging, which lets him pursue his passion of providing low-cost psychiatric care for uninsured patients" is a pretty heckin remarkable Type of Guy and I'm glad he exists.
His Substack says tens of thousands of subscribers. Assuming most are $100/year, that's more than a psychiatrist's salary from subscriptions alone.
Maybe you could provide a few examples of the kinds of projects you'd be especially excited about?
Once you sort out the most tax-efficient way to do this, it would be fantastic if you could share — I’ve wondered how to do this, too, and (speaking as a lawyer who sometimes dips shallowly into tax law) I’m not sure any of the explanations in the comments so far are totally accurate
Couple of thousand bucks to buy a chunk of hafnium, with ~27% Hf-178, and a dental X-ray machine to generate the nuclear isomer and trigger gamma-ray release.
Stimulated emission from Hf-178m2 was never replicated, and was just as implausible as cold fusion.
https://en.wikipedia.org/wiki/Hafnium_controversy
Muon-catalyzed fusion works fine, it's just not net-energy positive with current muon sources.
I mean the fake kind of cold fusion that didn't use muons and never replicated.
I'm a huge fan, Scott, but I sure hate it when people say they want to make the world a better place. We can't even agree on what would be better.
The best joke ever from the Mike Judge show "Silicon Valley" is the tech CEO speaking to his employees: "I don't want to be a part of a world in which someone else is making the world a better place than we are."
We are all children of Adam Smith, who argued very effectively that our best interest is to have the butcher, the baker and the candlestick maker work in their own best interests.
So, sorry, but when someone says they want to make the world a better place I think you are Stalin.
If you start a grants program for people who want to make the world a worse place, I'll link you.
Cool! Thanks.
I mean, didn't you write how the tails coming apart is a metaphor for life? I think there's nothing implausible about being in favor of everyone making the world moderately better, but not extremely better. Since if someone can make the world extremely better from their perspective, that probably involves making it worse for a lot of people. I think that's a not unreasonable intuition, but as a counterpoint, $250k probably can't make the world all that much better anyways.
I'm interested in what the people who want to make the world worse would do. Seems like the best thing we have are good institutions, so undermining them would be the way to make the world worse. But how would we do that? I mean faster than we are already?
I think Peter Thiel already has that covered
Before someone writes a thousand word rebuttal: that is mostly a joke, please don't link me to all the actual good companies the founders fund is working with
Nah, it was a good joke. I laughed.
This is very cool. But it'd be even cooler if it were easier to help you out with funding. I would love to donate to your capital pool here, but I think if I were to do that, either you'd have to pay income tax on the money or I'd have to pay gift tax on it. This basically means that one way or the other, half the money immediately goes to the government.
It seems extremely worthwhile to form a non-profit of some kind to prevent this. I think you could easily raise many millions of dollars for a project like this. I'm happy to pay the legal expenses to get it done.
I think good projects he does not fund will go on to the blog, and you can fund them directly from there, as one possibility. If you like the model, you can also consider donating to Emergent Ventures, although there you rely on Tyler Cowen's judgement.
Couple of questions:
1. Is there any limitation on what the money can be spent on? For example, some fellowships only allow their funds to be spent on direct costs of the research (e.g. paying participants) but not as a salary for the researchers.
2. Do you already have an idea of how you'll check the progress of the projects? Does one have to write some kind of report every other month? Do you have any other formal prerequisites like open data or preregistration?
3. Can one submit multiple proposals, and if so, should you fill out the form just once or for every submission separately?
4. Are there any restrictions concerning the timing? For example, if I still need half a year until I am finished with my PhD, is it okay to wait until afterwards?
5. How high is the bar for "direct applicability"? If one, for example, does basic psychological research on happiness and wellbeing, but the research is not directly aimed at applying this to the real world and actually making people feel better, would that still have a chance of being picked?
Wonderful initiative!
One question: it's unclear whether non-US applicants are permitted?
He specifically mentions being from a developing country, so seems like it's okay (might be more complex from a tax perspective though)
Ah, I'm a sloppy reader... Thanks!
Tossing this one out here in the hopes that someone else can pick it up....
Most cancers start as mutations in known oncogenes. They are genetically distinct from surrounding tissue. CRISPRs are good at acting when and only when they see a particular sequence. One could use the famous CRISPR-Cas9 to transfect exclusively tumor cells with some sort of cytotoxin (an aggressive protease perhaps -- then once the cell is dead, the protease will destroy itself before the membrane lyses). Or CRISPR-Cas12 cuts out the middleman and becomes a ravenous nuclease when it sees the trigger sequence. Delivering these agents could probably be done with standard techniques. ISTR one of those is the envelope of a DNA virus which cannot pass myelin -- that would be a useful extra safety feature for non-brain cancers.
How would one go about developing this?
First, take some convenient mammalian cells, transfect them with different colors of fluorescence, mix them in vitro, then kill one color with the CRISPR. This should require a minimum of sequencing, as the fluorescent proteins are known, and you just need to find a subsequence not present in the host genome. You can measure effectiveness and false-positive rate optically, which should also be cheap. This lets you iterate freely on the delivery mechanism.
Once that looks good, move to the in vivo version. Use some not-spreading-very-far vector to put fluorescent polka dots on mice, then remove them with a system-wide CRISPR.
Keep an eye out for behavioral signs of pain. The dying cells might leak enough ATP into the interstitial fluid to trigger an endovanilloid cascade. If so, better to learn sooner than later. Lidocaine may be all that's needed here, but maybe cortisone if there are signs of dangerous local inflammation.
If this part works, move on to cancers. Probably best to use skin cancer, since biopsies will be easier. This is where the sequencing may get expensive, since you need both healthy and tumor genomes from each mouse, and ideally many cells from the tumor as single-cell sequencing so that further mutations don't lead you astray.
Have three groups of test animals: no cancer, just cancer, and cancer plus cure. Then make all the comparisons in total lifespan and cancer biomarkers (pick a sufficiently short-lived species that total lifespan is practical to observe). The groups shouldn't need to be very big: just big enough that the proposition "cancer shortens lifespan" will show up unambiguously.
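(A quick power calculation backs up "the groups shouldn't need to be very big". The effect size is an assumption on my part: untreated mouse tumor models often shorten lifespan dramatically, here taken as 1.5 pooled standard deviations.)

```python
# Back-of-envelope group size for "cancer shortens lifespan" showing up
# unambiguously: two-sample t-test on lifespan, assumed Cohen's d = 1.5,
# two-sided alpha = 0.05, 90% power.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=1.5,
                                          alpha=0.05, power=0.9)
print(f"~{n_per_group:.0f} mice per group")  # on the order of 10
```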
I think this could be done by a grad student with good wet-lab skills on the sort of budget Scott is offering. I have no wet lab skills, so I'm hoping someone who does reads this.
Beyond the cure mice phase, I see two options. One is to proceed to human trials as is classic. This will require more money, but hopefully at that point other attention will arrive. Such a study would probably compare "normal treatment" to "normal treatment plus this", and would keep the clinical oncologists blinded, but warn them all to check signs and adjust dosages frequently.
The other is to establish a commercial veterinary cancer-cure center and let people wonder why we can cure dogs but not people, and maybe whether a human could be slipped in by officially being an orangutan.
Doing this without official sanction looks an awful lot like animal abuse.
The only vertebrate you can really biohack is yourself.
If you have an academic affiliation, or half of one, you can probably get mouse approval pretty easily. If not, the in vitro steps are still a good start.
Alternatively, move to a rural state and insinuate a plan to eat the mice when you're done. Then sue anyone who accuses you of animal cruelty.
I am super excited about the general project and would love to apply, but am very uncreative when it comes to generating new and fruitful research ideas.
My background: I have a Masters in psychology with a focus on clinical neuroscience. I received 5 years of training in CBT and am about to finish my PhD, which is about basic research in predictive coding and its conceptions of what emotions are. I know my stuff around statistics and good methodology. If funded by the grant, I could buy testing time on all the machinery usually needed for neuropsychological research (fMRI, EEG, TMS, EDA, etc.). Through the institution I work for at the moment I could get access to patients with all kinds of mental illnesses to recruit them as participants for experiments. I don’t have any medical degree, which is why any research with medications would unfortunately be off the table.
This is an excellent idea. I run a much smaller microgrants programme (only 1K) based on the same concept, basically borrowed from Tyler Cowen as well. www.ThenDoBetter.com/grants
Any advice for me?
My thoughts on this for you are… you need to think about whether you favour low-probability but high-reward ideas, or e.g. medium-probability and medium-reward ideas (or if you are indifferent). You also need to consider how much weight you give to applicants' other access to capital. My grants tilt personal, as I am interested in giving capital to people who struggle to get capital from orthodox sources. I also weight fairly highly the chance that the person can complete the project in some way, even if the outcome of the project has a very low chance of success. A completed negative result is good.
The other surprising aspect I found is that for a number of applications I thought were good but not suitable for me, a few words of encouragement and feedback set them off on a very positive path. This might have only been 5 to 10% of applications, but that soft feedback was valuable to them.
If an application is a NO for you, don’t waste time going back and forth or questioning your decision too much. Just move on. This is partly a question of your judgement and luck (presuming you are funding low-probability ideas), and speed is probably more valuable.
I would not disregard that this is an investment in the person or team as much as, or even more so than, the idea on occasion. At least it is for me. In that sense, it is partly a talent bet, and a bet that this talent with access to the right capital will pay off, potentially big. EA cannot reach and does not reach such talent, IMO.
As you rightly assess, one of your largest assets is your network and following, so-called social and relationship capital, and I would utilise that.
Very cool! I think it would be better if 'How much money do you need?' would be a long-form answer field. As it is right now, you can't see the formatting (or most of the text).
Will there be another one of these in the future?
Yeah, this is really exciting, especially given EA's current problem of trying to find highly scalable projects. I'd be excited if people applied with projects that could use this money to test something that could potentially absorb millions to tens of millions of dollars.
Okay, speaking of Mad Scientist Malarkey, I've just read this new post over on The Renaissance Mathematicus and "Wow" is my first reaction.
Funding research on brain transplants during the Cold War - https://thonyc.wordpress.com/2021/11/10/would-you-like-a-new-body-for-that-brain-sir/
"Brain transplants are the subject of science fiction and Gothic horror, right? One of the most famous Gothic horror stories, Mary Shelley’s Frankenstein; or, The Modern Prometheus features a brain transplant, of which much is made in the various film versions. But in real life, a fantasy not a reality, or? Wrong, the American neurosurgeon Robert White (1926–2010) devoted most of his working life to the dream of transplanting a human brain, experimenting, and working towards fulfilment of this dream. I’m a voracious reader consuming, particularly in my youth, vast amounts of scientific and related literature, but I had never come across the work of Robert White, which took place during my lifetime. Thanks to Brandy Schillace, this lacuna in my knowledge has been more than filled, through her fascinating and disturbing book 'Mr. Humble and Dr. Butcher: A Monkey’s Head, the Pope’s Neuroscientist, and the Quest to transplant the Soul', which tells in great detail the story of Robert White’s dream and his attempts to fulfil it.
The title is of course a play on the title of Robert Louis Stevenson’s notorious Gothic novella Strange Case of Dr Jekyll and Mr Hyde, the story of a medically induced split personality, with a good persona and an evil one. Here, Mr Humble refers to the neurosurgeon Bob White, deeply religious, Catholic family father and brain surgeon, who always engaged 150% for his patients. A saint of a man, who everybody looked up to and admired.
Dr. Butcher refers to the research scientist Dr White, who carried out a, at times truly brutal, programme of animal experimentation on the way to his ultimate goal, the transplantation of a human brain."
Given that Scott seems to be doing well enough financially, I'd feel better about renewing my subscription if some of it was going to grants like this.
Somewhere there is an ACX post which lists the impact of various educational interventions (e.g. class sizes, tutoring), but damned if I can find it. If anyone can point me in the right direction I would be grateful!
If you set up a way to give small donations into the pool, I'd be interested in donating, and I think others would too. I know there are other similar charities, but I'm kind of partial to Scott's judgement and network.
Two charities you should consider funding are the Center on Long-Term Risk and the Center for Reducing Suffering. These organizations are focused on reducing S-risks, or risks of astronomical suffering.
https://longtermrisk.org/
https://centerforreducingsuffering.org/
https://reducing-suffering.org/donation-recommendations/
https://www.youtube.com/watch?v=jiZxEJcFExc