Duplicates Report

Each row pairs two question titles flagged as potential duplicates, sorted by the similarity score shown in parentheses; identically worded pairs score 1.00.

Score | Entry 1 | Entry 2
(1.00) | What is relaxed adversarial training? | What is relaxed adversarial training?
(1.00) | What are finite factored sets? | What are finite factored sets?
(1.00) | What beneficial things would an aligned superintelligence be able to do? | What beneficial things would an aligned superintelligence be able to do?
(1.00) | How could an intelligence explosion be useful? | How could an intelligence explosion be useful?
(0.98) | What is Conjecture's main research agenda? | What is Conjecture's research agenda?
(0.98) | Why can't we build an AI that is programmed to shut off after some time? | Why can’t we build an AI that is programmed to turn off after some time?
(0.97) | What exactly does “AI alignment” mean? | What is AI alignment?
(0.96) | What are some introductory videos about AI safety? | Where can I find videos about AI safety?
(0.96) | What are Responsible Scaling Policies (RSPs)? | What is a responsible scaling policy (RSP)?
(0.96) | Which organizations are working on AI alignment? | What organizations are working on technical AI alignment?
(0.93) | What is the weak scaling hypothesis? | What is the strong scaling hypothesis?
(0.93) | What is the general nature of the concern about AI alignment? | What are some objections to the importance of AI alignment?
(0.92) | What is offline reinforcement learning (RL)? | What is online reinforcement learning (RL)?
(0.92) | What is AI-assisted alignment? | What is AI alignment?
(0.92) | What is online reinforcement learning (RL)? | What is reinforcement learning (RL)?
(0.92) | What are some proposed training techniques to solve outer misalignment? | What are some proposed training techniques to solve inner misalignment?
(0.92) | The alignment problem | How can we solve the alignment problem?
(0.91) | Should selfish people care about AI safety? | Are there “selfish” reasons for caring about AI safety?
(0.91) | Other resources | Resources elsewhere
(0.91) | Is AI alignment easy? | Aren't there easy solutions to AI alignment?
(0.91) | How can I help AI alignment researchers be more effective? | Are there promising ways to make AI alignment researchers smarter?
(0.91) | What are the capabilities of GPT-4? | What is GPT-4 and what is it capable of?
(0.91) | Why is AI alignment a hard problem? | At a high level, what is the challenge of AI alignment?
(0.90) | What is AI: Futures and Responsibility (AI:FAR)'s research agenda? | What is FAR AI's research agenda?
(0.90) | What is this site about? | What is this website about?
(0.90) | Do we have an example/evidence of outer misalignment? | Do we have an example/evidence of inner misalignment?
(0.90) | What should I do with my machine learning research idea for AI alignment? | What should I do with my idea for helping with AI alignment?
(0.89) | What is Contrast Consistent Search (CCS)? | What is Cross-Contrast Search (CCS)?
(0.89) | What is the strong scaling hypothesis? | What is the scaling hypothesis?
(0.89) | What is out of context learning? | What is in-context learning?
(0.89) | Isn't the real concern AI-enabled authoritarianism? | Isn't the real concern AI-enabled totalitarianism?
(0.89) | Is AI alignment easy? | Is AI alignment possible?
(0.89) | What can't AI do yet? | What will an AI never be able to do?
(0.89) | What is the UK's AI Security Institute? | What is the UK’s AI Safety Institute?
(0.89) | What is a Task AI? | What is a Task-directed AI?
(0.89) | What is GPT-4? | What is GPT-4 and what is it capable of?
(0.89) | What are "reasoning" AI models? | What are “Simulated reasoning” AI models?
(0.89) | Would a misaligned superintelligence kill literally everyone? | Why would a misaligned superintelligence kill everyone?
(0.89) | Why do people disagree on the likelihood of existential risks from AI? | What are some arguments against AI being an existential risk?
(0.88) | What are the different AI Alignment / Safety organizations and academics researching? | Briefly, what are the major AI safety organizations and academics working on?
(0.88) | Aren't there easy solutions to AI alignment? | What would a good solution to AI alignment look like?
(0.88) | Which alignment strategies can scale to superintelligence? | What concrete work is being done on alignment strategies which won’t scale to superintelligence?
(0.88) | What other options are there for pursuing a technical career in AI alignment? | How can I build a career in AI alignment?
(0.88) | Aren't there easy solutions to AI alignment? | Is AI alignment possible?
(0.88) | How powerful could a superintelligence become? | Isn't it impossible for a superintelligence to become very powerful?
(0.88) | Why do people disagree on the likelihood of existential risks from AI? | Do people seriously worry about existential risk from AI?
(0.88) | Future AI | Predictions about future AI
(0.88) | What is the general nature of the concern about AI alignment? | At a high level, what is the challenge of AI alignment?
(0.88) | What is concept distribution shift ? | What is a distributional shift?
(0.88) | Can't we just tell an AI to do what we want? | Can we tell an AI just to figure out what we want and then do that?
(0.88) | Why not just control AI? | Why not just let AI take over?
(0.87) | How can I work on public AI safety outreach? | How can I work on AI safety outreach in academia and among experts?
(0.87) | Alignment techniques | Other alignment approaches
(0.87) | What philosophical approaches are used in AI alignment? | How can I do conceptual, mathematical, or philosophical work on AI alignment?
(0.87) | What is the objective based perspective on the alignment problem? | What is the optimization based perspective on the alignment problem?
(0.87) | Could AI alignment research be bad? How? | Might AI alignment research lead to outcomes worse than extinction?
(0.87) | What are soft optimizers? | What is soft optimization?
(0.87) | What are Francois Chollet’s criticisms of AI Alignment? | What are Andrew Ng’s criticisms of AI Alignment?
(0.87) | What is outer alignment? | What is the difference between inner and outer alignment?
(0.87) | What is scaffolding? | What are scaffolds?
(0.87) | What exactly does “AI alignment” mean? | What is AI-assisted alignment?
(0.87) | What is reward modeling? | What is recursive reward modeling?
(0.87) | What are some existing alignment strategies, and what are their pitfalls? | What alignment strategies are scalably safe and competitive?
(0.87) | Does talk of existential risk from AI detract from current harms? | Do people seriously worry about existential risk from AI?
(0.86) | What are some objections to the importance of AI alignment? | At a high level, what is the challenge of AI alignment?
(0.86) | What are some proposed training techniques to solve outer misalignment? | What are some proposed training techniques to solve deceptive misalignment?
(0.86) | What are soft optimizers? | What are mild optimizers?
(0.86) | How could an intelligence explosion be useful? | How might an "intelligence explosion" be dangerous?
(0.86) | What benchmarks exist for evaluating the safety of AI systems? | What benchmarks exist for measuring the capabilities of AI systems?
(0.86) | Why aren't more people worried if superintelligence is so dangerous? | Why should I worry about superintelligence?
(0.86) | What would a good solution to AI alignment look like? | Is AI alignment possible?
(0.86) | Might an "intelligence explosion" never occur? | How likely is an intelligence explosion?
(0.86) | What does it mean for an AI to think? | Can an AI really think?
(0.86) | Why is AI alignment a hard problem? | Aren't there easy solutions to AI alignment?
(0.86) | Why is AI alignment a hard problem? | Why do some AI researchers not worry about alignment?
(0.86) | Difficulty of alignment | How difficult should we expect alignment to be?
(0.86) | Why is AI alignment a hard problem? | How does AI taking things literally contribute to alignment being hard?
(0.86) | Could AI alignment research be bad? How? | What is the general nature of the concern about AI alignment?
(0.86) | A case for AI safety | What is AI safety?
(0.86) | Why would AGI be more dangerous than other technologies? | Why don't we just not build AGI if it's so dangerous?
(0.85) | Could AI alignment research be bad? How? | What are some objections to the importance of AI alignment?
(0.85) | What is reward design? | What is reward modeling?
(0.85) | What is decision theory? | What is "logical decision theory"?
(0.85) | Governance research organizations | Governance research
(0.85) | What is a distribution shift and how is it related to alignment? | What is a distributional shift?
(0.85) | How could an intelligence explosion be useful? | What is an intelligence explosion?
(0.85) | What is an intelligence explosion? | How could an intelligence explosion be useful?
(0.85) | What are Francois Chollet’s criticisms of AI Alignment? | What are Yann LeCun’s criticisms of AI Alignment?
(0.85) | What is an "AI doomer"? | What is “AI doomerism”?
(0.85) | Why do some AI researchers not worry about alignment? | What is the general nature of the concern about AI alignment?
(0.85) | Superintelligence | Implications of superintelligence
(0.85) | What are the arguments for a slow takeoff? | Why might we expect a fast takeoff?
(0.85) | How can I contribute to AI safety as a volunteer? | What are some simple things I can do to contribute to AI safety?
(0.85) | What is narrow reward modeling? | What is reward modeling?
(0.85) | Why do some AI researchers not worry about alignment? | What are some objections to the importance of AI alignment?
(0.85) | What are Andrew Ng’s criticisms of AI Alignment? | What are Richard Ngo's views on AI alignment?
(0.85) | Should selfish people care about AI safety? | If I only care about helping people alive today, does AI safety still matter?
(0.85) | What is Eliezer Yudkowsky's view on AI alignment? | What are Richard Ngo's views on AI alignment?
(0.85) | Why do people disagree on the likelihood of existential risks from AI? | Does talk of existential risk from AI detract from current harms?
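
The report does not state how its scores are computed. As a rough illustration only, the sketch below assumes they are cosine similarities between sentence embeddings of the question titles; the sentence-transformers library, the all-MiniLM-L6-v2 model, and the short sample question list are assumptions made for the example, not details taken from the report. The 0.85 cutoff simply matches the lowest score that appears above.

```python
# Minimal sketch of how a duplicates report like the one above could be
# produced. Assumptions (not stated in the report): scores are cosine
# similarities between sentence embeddings of the question titles; the
# embedding library, model name, and sample questions are placeholders.
import itertools

import numpy as np
from sentence_transformers import SentenceTransformer

questions = [
    "What is relaxed adversarial training?",
    "What exactly does “AI alignment” mean?",
    "What is AI alignment?",
    # ... the rest of the question database would go here
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

# Encode all titles; normalized embeddings make a dot product equal cosine similarity.
embeddings = model.encode(questions, normalize_embeddings=True)

# Score every unordered pair and keep those at or above the report's lowest score.
threshold = 0.85
pairs = []
for i, j in itertools.combinations(range(len(questions)), 2):
    score = float(np.dot(embeddings[i], embeddings[j]))
    if score >= threshold:
        pairs.append((score, questions[i], questions[j]))

# Most similar pairs first, matching the ordering of the report.
for score, a, b in sorted(pairs, reverse=True):
    print(f"({score:.2f}) | {a} | {b}")
```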