News coverage of artificial intelligence (AI) often seems to emphasize the technology's benefits and economic potential over its mounting drawbacks. How does a technology poised to be so disruptive become so uncritically accepted? Why, put simply, do portrayals of AI in legacy media typically fail to convey the controversies otherwise found in research or policy debates?

In this article, we introduce the concept of "freezing out" to describe translation processes that cool debates over a technology's merits. "Freezing out" looks at the flip side of controversy studies, examining the production of non-controversies, or cold controversies, rather than hot topics and debates.

We use AI coverage in Canadian national media to analyze how controversy is "frozen out." (…) Drawing on in-depth interviews with Francophone and Anglophone journalists, as well as topic modeling of data collected from five major newspapers, we find that routine news-production processes among journalists, experts, entrepreneurs, and governments build, maintain, and promote Canada's AI ecosystem.

"Freezing out" contributes to a broader interest in how heterogeneous actors cross their domains of expertise in policy, media, and research circles to cool controversies over artificial intelligence.

Summary of "Freezing out: Legacy media's shaping of AI as a cold controversy"


Theory Is All You Need: AI, Human Cognition, and Decision Making: Artificial intelligence (AI) now matches or outperforms human intelligence in an astonishing array of games, tests, and other cognitive tasks that involve high-level reasoning and thinking. Many scholars argue that—due to human bias and bounded rationality—humans should (or will soon) be replaced by AI in situations involving high-level cognition and strategic decision making. We disagree.

Cognition is All You Need — The Next Layer of AI Above Large Language Models: In this position paper, we present Cognitive AI, a higher-level framework for implementing programmatically defined neuro-symbolic cognition above and outside of large language models. Specifically, we propose a dual-layer functional architecture for Cognitive AI that serves as a roadmap for AI systems that can perform complex multi-step knowledge work. We propose that Cognitive AI is a necessary precursor for the evolution of higher forms of AI, such as AGI, and specifically claim that AGI cannot be achieved by probabilistic approaches on their own.

A Safe Harbor for AI Evaluation and Red Teaming: An argument for legal and technical safe harbors for AI safety and trustworthiness research

Is AI an Existential Risk? Q&A with RAND Experts

UN General Assembly adopts first-ever resolution on AI: The adoption of the first UN resolution on AI could mark a key stride towards fostering a global framework that promotes the responsible and inclusive utilisation of this transformative technology, underscoring the imperative of aligning AI advancements with the collective welfare of humanity.

US, Britain announce partnership on AI safety, testing: “We all know AI is the defining technology of our generation,” [Commerce Secretary Gina] Raimondo said. “This partnership will accelerate both of our institutes’ work across the full spectrum to address the risks of our national security concerns and the concerns of our broader society.” Britain and the United States are among countries establishing government-led AI safety institutes.

AI writing, illustration emits hundreds of times less carbon than humans: With the evolution of artificial intelligence comes discussion of the technology’s environmental impact. A new study has found that for the tasks of writing and illustrating, AI emits hundreds of times less carbon than humans performing the same tasks. That does not mean, however, that AI can or should replace human writers and illustrators, the study’s authors argue.

200+ Artists Urge Tech Platforms: Stop Devaluing Music; Top musicians among hundreds warning against replacing human artists with AI: More than 200 musical artists — including heavy hitters such as Billie Eilish, Katy Perry and Smokey Robinson — have penned an open letter to AI developers, tech firms and digital platforms to “cease the use of artificial intelligence (AI) to infringe upon and devalue the rights of human artists.”

Chinese mourners turn to AI to remember and ‘revive’ loved ones: Growing interest in services that create digital clones of the dead

Teachers are using AI to grade essays. But some experts are raising ethical concerns

Explosive growth from AI automation: A review of the arguments: We examine whether substantial AI automation could accelerate global economic growth by about an order of magnitude, akin to the economic growth effects of the Industrial Revolution. We identify three primary drivers for such growth: 1) the scalability of an AI “labor force” restoring a regime of increasing returns to scale, 2) the rapid expansion of an AI labor force, and 3) a massive increase in output from rapid automation occurring over a brief period of time. Against this backdrop, we evaluate nine counterarguments, including regulatory hurdles, production bottlenecks, alignment issues, and the pace of automation. We tentatively assess these arguments, finding most are unlikely deciders. We conclude that explosive growth seems plausible with AI capable of broadly substituting for human labor, but high confidence in this claim seems currently unwarranted. Key questions remain about the intensity of regulatory responses to AI, physical bottlenecks in production, the economic value of superhuman abilities, and the rate at which AI automation could occur.

Generative AI for Economic Research: Use Cases and Implications for Economists: At this point, human researchers, especially when AI-assisted, are still the best technology around for generating economic research!

Why Should News Organizations (Not) Build an LLM? Integrating Large Language Models (LLMs) into the newsroom has the potential to unlock a myriad of opportunities for news organizations in tasks relevant to content creation and editing, as well as news gathering and distribution. But as newsrooms continue to explore the avenues and prospects for harnessing LLMs, a question arises around the strategic and competitive use of the technology: should news organizations strive to train their own LLMs?

In this post I argue that news organizations (especially those with limited resources) that use prompt engineering, fine-tuning, and retrieval augmented generation (RAG) to enhance their productivity and offerings will be strategically better off than if they train their own LLMs from scratch.
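The retrieval-augmented approach the post recommends can be illustrated with a minimal sketch: rather than training a model, a newsroom retrieves relevant passages from its own archive and prepends them to the prompt sent to an off-the-shelf LLM. The toy corpus, the bag-of-words scoring, and the function names below are illustrative assumptions, not any real newsroom pipeline or library API.

```python
# Minimal RAG sketch: retrieve archive passages relevant to a query,
# then assemble a grounded prompt for an off-the-shelf LLM.
import math
from collections import Counter

def tokenize(text):
    return [w.lower().strip(".,!?") for w in text.split()]

def cosine(a, b):
    # Cosine similarity between two bag-of-words count vectors.
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, corpus, k=2):
    # Rank archive documents by similarity to the query; keep the top k.
    q = Counter(tokenize(query))
    ranked = sorted(corpus, key=lambda d: cosine(q, Counter(tokenize(d))),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, corpus):
    # Prepend retrieved context so the model answers from the archive,
    # not from its parametric memory alone.
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer using only the context above.")

archive = [
    "The city council approved the new transit budget in March.",
    "A local AI startup raised funding for newsroom tools.",
    "The mayor announced a climate plan last week.",
]
print(build_prompt("What did the city council approve?", archive))
```

A production system would swap the bag-of-words scorer for dense embeddings and a vector store, but the strategic point stands: everything here runs against a frozen, third-party model, with no training cost.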