
Duplicates Report

Score | Entry 1 | Entry 2
1.00 | What are some AI alignment research agendas currently being pursued? | What are some AI alignment research agendas currently being pursued?
1.00 | What is the lottery ticket hypothesis? | What is the lottery ticket hypothesis?
1.00 | Is AI alignment possible? | Is AI alignment possible?
1.00 | Wouldn’t any AI be constrained by the limited computing power in the world? | Wouldn’t any AI be constrained by the limited computing power in the world?
1.00 | What are neuron families? | What are neuron families?
1.00 | Can safe AI design be competitive? | Can safe AI design be competitive?
1.00 | How is Beth Barnes evaluating LM power seeking? | How is Beth Barnes evaluating LM power seeking?
1.00 | Will superhuman AI systems be goal directed? | Will superhuman AI systems be goal directed?
1.00 | What are finite factored sets? | What are finite factored sets?
1.00 | What is relaxed adversarial training? | What is relaxed adversarial training?
1.00 | How is AGI different from current AI? | How is AGI different from current AI?
0.99 | Would taking AI safety seriously lead to a totalitarian government? | Would taking AI safety seriously lead to a totalitarian governments ?
0.98 | What is Conjecture's main research agenda? | What is Conjecture's research agenda?
0.98 | Why can't we build an AI that is programmed to shut off after some time? | Why can’t we build an AI that is programmed to turn off after some time?
0.98 | What is reward misspecification? | What is “reward misspecification”?
0.98 | Why can't we just solve alignment through trial and error? | Why can’t we just use trial and error to solve alignment?
0.97 | Will future AIs want to solve the alignment problem? | Will future AIs be able to solve the alignment problem?
0.97 | What exactly does “AI alignment” mean? | What is AI alignment?
0.96 | What are some common objections to the need for AI alignment, and brief responses to these? | What are some objections to the importance of AI alignment?
0.96 | What are some introductory videos about AI safety? | Where can I find videos about AI Safety?
0.96 | Which organizations are working on AI alignment? | What organizations are working on technical AI alignment?
0.94 | Where can I learn more about AI alignment? | What are some good resources on AI alignment?
0.94 | Why would AGI be more dangerous than other technologies? | We have dealt with dangerous technologies before, why is AGI different?
0.93 | Which groups are leading AI capabilities development? | Who is leading AI capabilities development?
0.93 | What is the UN AI Advisory Body? | What is the United Nations High-Level Advisory Body on Artificial Intelligence?
0.93 | What is the weak scaling hypothesis? | What is the strong scaling hypothesis?
0.93 | What is the general nature of the concern about AI alignment? | What are some objections to the importance of AI alignment?
0.92 | What is required for an AI to be aligned? | What are the requirements for AI alignment?
0.92 | Copy of What are the possible levels of difficulty for the alignment problem? | What are the possible levels of difficulty for the alignment problem?
0.92 | What is offline reinforcement learning (RL)? | What is online reinforcement learning (RL)?
0.92 | What is AI-assisted alignment? | What is AI alignment?
0.92 | What is online reinforcement learning (RL)? | What is reinforcement learning (RL)?
0.92 | What are some proposed training techniques to solve outer misalignment? | What are some proposed training techniques to solve inner misalignment?
0.91 | What is least-to-most prompting? | What is least to most prompting?
0.91 | Why would AGI be more dangerous than other technologies? | Other technologies have been deemed potentially world-ending, why is AGI different?
0.91 | Should selfish people care about AI safety? | Are there “selfish” reasons for caring about AI safety?
0.91 | What are some common objections to the need for AI alignment, and brief responses to these? | What is the general nature of the concern about AI alignment?
0.91 | Which organizations are working on AI alignment? | What approaches are AI alignment organizations working on?
0.91 | What are some good resources on AI alignment? | I’d like to get deeper into the AI alignment literature. Where should I look?
0.91 | What are the capabilities of GPT-4? | What is GPT-4 and what is it capable of?
0.91 | What is the easy goal inference problem? | What is the goal inference problem?
0.91 | Where can I learn more about AI alignment? | I’d like to get deeper into the AI alignment literature. Where should I look?
0.90 | We have dealt with dangerous technologies before, why is AGI different? | Other technologies have been deemed potentially world-ending, why is AGI different?
0.90 | Do we have an example/evidence of outer misalignment? | Do we have an example/evidence of inner misalignment?
0.90 | What should I do with my machine learning research idea for AI alignment? | What should I do with my idea for helping with AI alignment?
0.90 | Why would AGI want to self-improve or self-modify at all? | Would AGI want to self-improve or self-modify at all?
0.89 | How plausible is AI existential risk? | Do people seriously worry about existential risk from AI?
0.89 | What is out of context learning? | What is in-context learning?
0.89 | What is GPT-4? | What is GPT-4 and what is it capable of?
0.89 | Why would AGI be more dangerous than other technologies? | Why is AGI more dangerous than nanotechnology or biology?
0.88 | Is superintelligence soon really possible? | Will we ever build a superintelligence?
0.88 | What are the different AI Alignment / Safety organizations and academics researching? | Briefly, what are the major AI safety organizations and academics working on?
0.88 | What are some introductory videos about AI safety? | What are some introductions to AI safety?
0.88 | Aren't there easy solutions to AI alignment? | What would a good solution to AI alignment look like?
0.88 | Which alignment strategies can scale to superintelligence? | What concrete work is being done on alignment strategies which won’t scale to superintelligence?
0.88 | Is AI alignment possible? | Aren't there easy solutions to AI alignment?
0.88 | Aren't there easy solutions to AI alignment? | Is AI alignment possible?
0.88 | What are some good books about AGI safety? | What AGI safety reading lists are there?
0.88 | What is cognitive emulation? | What is cognitive emulation (CoEm)?
0.88 | What is concept distribution shift ? | What is a distributional shift?
0.88 | Can't we just tell an AI to do what we want? | Can we tell an AI just to figure out what we want and then do that?
0.87 | Why not just raise AI like kids? | Why can't we just make a "child AI" and raise it?
0.87 | What philosophical approaches are used in AI alignment? | How can I do conceptual, mathematical, or philosophical work on AI alignment?
0.87 | What is the objective based perspective on the alignment problem? | What is the optimization based perspective on the alignment problem?
0.87 | What are some AI alignment research agendas currently being pursued? | What approaches are AI alignment organizations working on?
0.87 | What are Francois Chollet’s criticisms of AI Alignment? | What are Andrew Ng’s criticisms of AI Alignment?
0.87 | What is outer alignment? | What is the difference between inner and outer alignment?
0.87 | What is meant by 'first critical try'? | What is meant by 'first crucial try'?
0.87 | What exactly does “AI alignment” mean? | What is AI-assisted alignment?
0.87 | What is reward modeling? | What is recursive reward modeling?
0.87 | What are some existing alignment strategies, and what are their pitfalls? | What alignment strategies are scalably safe and competitive?
0.86 | We have dealt with dangerous technologies before, why is AGI different? | Why don't we just not build AGI if it's so dangerous?
0.86 | What are some proposed training techniques to solve outer misalignment? | What are some proposed training techniques to solve deceptive misalignment?
0.86 | How could an intelligence explosion be useful? | How might an "intelligence explosion" be dangerous?
0.86 | Why aren't more people worried if superintelligence is so dangerous? | Why should I worry about superintelligence?
0.86 | Is AI alignment possible? | What would a good solution to AI alignment look like?
0.86 | What would a good solution to AI alignment look like? | Is AI alignment possible?
0.86 | Might an "intelligence explosion" never occur? | How likely is an intelligence explosion?
0.86 | What are some of the leading AI capabilities organizations? | Which groups are leading AI capabilities development?
0.86 | What does it mean for an AI to think? | Can an AI really think?
0.86 | Why is AI alignment a hard problem? | Aren't there easy solutions to AI alignment?
0.86 | What organizations are working on technical AI alignment? | What approaches are AI alignment organizations working on?
0.86 | Why is AI alignment a hard problem? | Why do some AI researchers not worry about alignment?
0.86 | Why is AI alignment a hard problem? | How does AI taking things literally contribute to alignment being hard?
0.86 | Could AI alignment research be bad? How? | What is the general nature of the concern about AI alignment?
0.86 | Why would AGI be more dangerous than other technologies? | Why don't we just not build AGI if it's so dangerous?
0.86 | We have dealt with dangerous technologies before, why is AGI different? | Why is AGI more dangerous than nanotechnology or biology?
0.86 | What concepts underlie existential risk from AI? | What are the main sources of AI existential risk?
0.86 | What is AI safety? | What is AI risk?
0.85 | Could AI alignment research be bad? How? | What are some objections to the importance of AI alignment?
0.85 | What is reward design? | What is reward modeling?
0.85 | What is decision theory? | What is "logical decision theory"?
0.85 | What is a distribution shift and how is it related to alignment? | What is a distributional shift?
0.85 | What is an intelligence explosion? | How could an intelligence explosion be useful?
0.85 | What are Francois Chollet’s criticisms of AI Alignment? | What are Yann LeCun’s criticisms of AI Alignment?
0.85 | Why do some AI researchers not worry about alignment? | What is the general nature of the concern about AI alignment?
0.85 | What are the arguments for a slow takeoff? | Why might we expect a fast takeoff?
0.85 | What is everyone working on in AI alignment? | What approaches are AI alignment organizations working on?
0.85 | What is narrow reward modeling? | What is reward modeling?
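
The report does not state how its scores are computed, but values like these are typically pairwise text-similarity scores. As an illustrative sketch only, the code below shows one common way such a report could be generated, using sentence-embedding cosine similarity; the embedding model name, the 0.85 reporting threshold, and the sample question list are assumptions for demonstration, not taken from this report.

```python
# Illustrative sketch only: flag near-duplicate questions by cosine similarity
# of sentence embeddings. The model, threshold, and question list below are
# assumptions; this report does not specify its actual scoring method.
from itertools import combinations

from sentence_transformers import SentenceTransformer, util

questions = [
    "What is the lottery ticket hypothesis?",
    "Is AI alignment possible?",
    "Aren't there easy solutions to AI alignment?",
    "What would a good solution to AI alignment look like?",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
embeddings = model.encode(
    questions, convert_to_tensor=True, normalize_embeddings=True
)

rows = []
for i, j in combinations(range(len(questions)), 2):
    score = util.cos_sim(embeddings[i], embeddings[j]).item()
    if score >= 0.85:  # assumed reporting threshold
        rows.append((score, questions[i], questions[j]))

# Print highest-scoring pairs first, in the same Score | Entry 1 | Entry 2 layout.
for score, a, b in sorted(rows, reverse=True):
    print(f"{score:.2f} | {a} | {b}")
```

Under a setup like this, identical titles score 1.00 and close paraphrases score slightly lower, which is consistent with the pattern in the table above.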