Duplicates Report

Score | Entry 1 | Entry 2
(1.00) | What is the lottery ticket hypothesis? | What is the lottery ticket hypothesis?
(1.00) | What is relaxed adversarial training? | What is relaxed adversarial training?
(1.00) | Is AI alignment possible? | Is AI alignment possible?
(1.00) | What is Few-Shot Prompting? | What is few-shot prompting?
(1.00) | Wouldn’t any AI be constrained by the limited computing power in the world? | Wouldn’t any AI be constrained by the limited computing power in the world?
(1.00) | What is Contrast Consistent Search (CCS)? | What is Contrast Consistent Search(CCS)?
(1.00) | What are neuron families? | What are neuron families?
(1.00) | What happens if all goes well? | What happens if all goes well?
(1.00) | What are finite factored sets? | What are finite factored sets?
(1.00) | Will superhuman AI systems be goal directed? | Will superhuman AI systems be goal directed?
(0.99) | Would taking AI safety seriously lead to a totalitarian government? | Would taking AI safety seriously lead to a totalitarian governments ?
(0.98) | What is Conjecture's main research agenda? | What is Conjecture's research agenda?
(0.98) | Why can't we build an AI that is programmed to shut off after some time? | Why can’t we build an AI that is programmed to turn off after some time?
(0.98) | What is reward misspecification? | What is “reward misspecification”?
(0.98) | Why can't we just solve alignment through trial and error? | Why can’t we just use trial and error to solve alignment?
(0.97) | How would we evaluate if an AI is an AGI? | How would we know if an AI is an AGI?
(0.97) | Will future AIs want to solve the alignment problem? | Will future AIs be able to solve the alignment problem?
(0.97) | Can AI’s think and feel? | Can AIs think and feel?
(0.97) | What exactly does “AI alignment” mean? | What is AI alignment?
(0.97) | What is Stuart Armstrong's research strategy? | What is Stuart Armstrong's research strategy
(0.96) | What are some common objections to the need for AI alignment, and brief responses to these? | What are some objections to the importance of AI alignment?
(0.96) | What are some introductory videos about AI safety? | Where can I find videos about AI safety?
(0.96) | What are Responsible Scaling Policies (RSPs)? | What is a responsible scaling policy (RSP)?
(0.96) | Which organizations are working on AI alignment? | What organizations are working on technical AI alignment?
(0.94) | Where can I learn more about AI alignment? | What are some good resources on AI alignment?
(0.94) | How would we evaluate if an AI is an AGI? | What criteria would we use to determine whether an AI counts as AGI?
(0.94) | What is Few-Shot Prompting? | What is few shot prompting?
(0.94) | What is few-shot prompting? | What is few shot prompting?
(0.94) | Why would AGI be more dangerous than other technologies? | We have dealt with dangerous technologies before, why is AGI different?
(0.93) | What are AI "capabilities”? | What is an AI capability?
(0.93) | What is the UN AI Advisory Body? | What is the United Nations High-Level Advisory Body on Artificial Intelligence?
(0.93) | What is the weak scaling hypothesis? | What is the strong scaling hypothesis?
(0.93) | What is the general nature of the concern about AI alignment? | What are some objections to the importance of AI alignment?
(0.92) | Copy of What are the possible levels of difficulty for the alignment problem? | What are the possible levels of difficulty for the alignment problem?
(0.92) | What are current machine learning anchors? | What are “current machine learning” anchors?
(0.92) | What is offline reinforcement learning (RL)? | What is online reinforcement learning (RL)?
(0.92) | What is AI-assisted alignment? | What is AI alignment?
(0.92) | What is online reinforcement learning (RL)? | What is reinforcement learning (RL)?
(0.92) | What are some proposed training techniques to solve outer misalignment? | What are some proposed training techniques to solve inner misalignment?
(0.91) | What is least-to-most prompting? | What is least to most prompting?
(0.91) | Why would AGI be more dangerous than other technologies? | Other technologies have been deemed potentially world-ending, why is AGI different?
(0.91) | Should selfish people care about AI safety? | Are there “selfish” reasons for caring about AI safety?
(0.91) | What are some common objections to the need for AI alignment, and brief responses to these? | What is the general nature of the concern about AI alignment?
(0.91) | What are some good resources on AI alignment? | I’d like to get deeper into the AI alignment literature. Where should I look?
(0.91) | What are the capabilities of GPT-4? | What is GPT-4 and what is it capable of?
(0.91) | What criteria would we use to determine whether an AI counts as AGI? | How would we know if an AI is an AGI?
(0.91) | Where can I learn more about AI alignment? | I’d like to get deeper into the AI alignment literature. Where should I look?
(0.90) | We have dealt with dangerous technologies before, why is AGI different? | Other technologies have been deemed potentially world-ending, why is AGI different?
(0.90) | Introduction to AI Safety | What are some introductions to AI safety?
(0.90) | What is AI: Futures and Responsibility (AI:FAR)'s research agenda? | What is FAR AI's research agenda?
(0.90) | Superintelligence is unlikely? | Is superintelligence soon really possible?
(0.90) | Do we have an example/evidence of outer misalignment? | Do we have an example/evidence of inner misalignment?
(0.90) | What should I do with my machine learning research idea for AI alignment? | What should I do with my idea for helping with AI alignment?
(0.90) | Why would AGI want to self-improve or self-modify at all? | Would AGI want to self-improve or self-modify at all?
(0.90) | Would limiting AI development require an invasive global surveillance regime? | Wouldn't slowing down or stopping AI require an invasive global surveillance regime?
(0.89) | How plausible is AI existential risk? | Do people seriously worry about existential risk from AI?
(0.89) | What is Contrast Consistent Search (CCS)? | What is Cross-Contrast Search (CCS)?
(0.89) | What is Contrast Consistent Search(CCS)? | What is Cross-Contrast Search (CCS)?
(0.89) | What is out of context learning? | What is in-context learning?
(0.89) | Isn't the real concern AI-enabled authoritarianism? | Isn't the real concern AI-enabled totalitarianism?
(0.89) | What is a Task AI? | What is a Task-directed AI?
(0.89) | What is GPT-4? | What is GPT-4 and what is it capable of?
(0.89) | Why would AGI be more dangerous than other technologies? | Why is AGI more dangerous than nanotechnology or biology?
(0.88) | Is superintelligence soon really possible? | Will we ever build a superintelligence?
(0.88) | What are the different AI Alignment / Safety organizations and academics researching? | Briefly, what are the major AI safety organizations and academics working on?
(0.88) | What are some introductory videos about AI safety? | What are some introductions to AI safety?
(0.88) | Aren't there easy solutions to AI alignment? | What would a good solution to AI alignment look like?
(0.88) | Which alignment strategies can scale to superintelligence? | What concrete work is being done on alignment strategies which won’t scale to superintelligence?
(0.88) | Is AI alignment possible? | Aren't there easy solutions to AI alignment?
(0.88) | Aren't there easy solutions to AI alignment? | Is AI alignment possible?
(0.88) | Introduction to AI Safety | New to AI safety? Start here.
(0.88) | What is concept distribution shift ? | What is a distributional shift?
(0.88) | Can't we just tell an AI to do what we want? | Can we tell an AI just to figure out what we want and then do that?
(0.87) | Why not just raise AI like kids? | Why can't we just make a "child AI" and raise it?
(0.87) | How can I work on public AI safety outreach? | How can I work on AI safety outreach in academia and among experts?
(0.87) | What philosophical approaches are used in AI alignment? | How can I do conceptual, mathematical, or philosophical work on AI alignment?
(0.87) | What is the objective based perspective on the alignment problem? | What is the optimization based perspective on the alignment problem?
(0.87) | Could AI alignment research be bad? How? | Might AI alignment research lead to outcomes worse than extinction?
(0.87) | What are Francois Chollet’s criticisms of AI Alignment? | What are Andrew Ng’s criticisms of AI Alignment?
(0.87) | What is outer alignment? | What is the difference between inner and outer alignment?
(0.87) | What is meant by 'first critical try'? | What is meant by 'first crucial try'?
(0.87) | What exactly does “AI alignment” mean? | What is AI-assisted alignment?
(0.87) | What is reward modeling? | What is recursive reward modeling?
(0.87) | What else is on aisafety.info? | What is aisafety.info about?
(0.87) | What are some existing alignment strategies, and what are their pitfalls? | What alignment strategies are scalably safe and competitive?
(0.87) | Superintelligence is unlikely? | Is the risk of superintelligence exaggerated?
(0.87) | Does talk of existential risk from AI detract from current harms? | Do people seriously worry about existential risk from AI?
(0.86) | We have dealt with dangerous technologies before, why is AGI different? | Why don't we just not build AGI if it's so dangerous?
(0.86) | What are some proposed training techniques to solve outer misalignment? | What are some proposed training techniques to solve deceptive misalignment?
(0.86) | How could an intelligence explosion be useful? | How might an "intelligence explosion" be dangerous?
(0.86) | Why can't we just turn the AI off if it starts to misbehave? | Can’t we just stop a misbehaving AI?
(0.86) | What benchmarks exist for evaluating the safety of AI systems? | What benchmarks exist for measuring the capabilities of AI systems?
(0.86) | Why aren't more people worried if superintelligence is so dangerous? | Why should I worry about superintelligence?
(0.86) | Is AI alignment possible? | What would a good solution to AI alignment look like?
(0.86) | What would a good solution to AI alignment look like? | Is AI alignment possible?
(0.86) | Might an "intelligence explosion" never occur? | How likely is an intelligence explosion?
(0.86) | What does it mean for an AI to think? | Can an AI really think?
(0.86) | Why is AI alignment a hard problem? | Aren't there easy solutions to AI alignment?
(0.86) | Why is AI alignment a hard problem? | Why do some AI researchers not worry about alignment?
(0.86) | Why, in outline form, should we be concerned about advanced AI? | Why are we scared of advanced AI?
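
The report does not say how these scores are produced. Below is a minimal sketch of how a pairing like this could be generated, assuming the scores are cosine similarities between sentence embeddings of the question titles; the model name (all-MiniLM-L6-v2), the 0.85 threshold, and the sample questions are illustrative assumptions, not the actual pipeline behind this report.

```python
from itertools import combinations

from sentence_transformers import SentenceTransformer, util

# Hypothetical sample of question titles; a real run would load the full
# question database.
questions = [
    "What is the lottery ticket hypothesis?",
    "What is Few-Shot Prompting?",
    "What is few-shot prompting?",
    "What are finite factored sets?",
]

# Illustrative embedding model; not necessarily what produced the scores above.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(questions, convert_to_tensor=True)

# Score every pair and keep those above a (hypothetical) duplicate threshold.
THRESHOLD = 0.85
pairs = []
for i, j in combinations(range(len(questions)), 2):
    score = util.cos_sim(embeddings[i], embeddings[j]).item()
    if score >= THRESHOLD:
        pairs.append((score, questions[i], questions[j]))

# Print highest-scoring pairs first, in the same "(score) | Entry 1 | Entry 2"
# layout as the report above.
for score, a, b in sorted(pairs, reverse=True):
    print(f"({score:.2f}) | {a} | {b}")
```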