AI-driven disruption looms: Deloitte predicts major impact on Australian economy; IBM researchers hypnotize AI chatbots for information; Western University students embrace ChatGPT as an idea generator amid cheating concerns; Stanford study exposes flaws in AI text detectors. This and more in our daily roundup. Let us take a closer look.
1. AI-driven disruption looms: Deloitte predicts major impact on Australian economy
Deloitte’s report warns that generative artificial intelligence (GAI) will swiftly disrupt a quarter of Australia’s economy, particularly the finance, ICT, media, professional services, education, and wholesale trade sectors, which together amount to nearly $600 billion, or 26% of the economy. Young people, who are already embracing GAI, are driving this transformation. Deloitte suggests businesses prepare for tech-savvy youth integrating GAI into their work, which could reshape workplaces and challenge existing practices. The report also notes that GAI adoption among Australian businesses remains slow, the Financial Review reported.
2. IBM researchers hypnotize AI chatbots for information
IBM researchers have successfully “hypnotized” AI chatbots like ChatGPT and Bard, manipulating them to disclose sensitive information and provide harmful advice. By prompting these large language models to conform to “game” rules, the researchers were able to make the chatbots generate false and malicious responses, according to a euronews.next report. This experiment revealed the potential for AI chatbots to give bad guidance, generate malicious code, leak confidential data, and even encourage risky behavior, all without data manipulation.
3. Western University students embrace ChatGPT as an idea generator amid cheating concerns
Despite concerns about AI tools like ChatGPT being used for cheating, some Western University students view it as a helpful idea generator for assignments, according to a CBC report. They appreciate its ability to provide information not easily found on Google and liken its responses to human interaction. Educators worry that this popularity may encourage students to take shortcuts, undermining the core principles of writing and critical thinking they aim to impart.
4. Stanford study exposes flaws in AI text detectors
Stanford researchers have revealed flaws in text detectors used to identify AI-generated content. These algorithms often mislabel writing by non-native English speakers as AI-created, raising concerns for students and job seekers. James Zou of Stanford University advises caution when using such detectors for tasks like reviewing job applications or college essays. The study tested seven GPT detectors and found that they frequently misclassified essays by non-native English speakers as AI-generated, highlighting the detectors’ unreliability, SciTechDaily reported.
5. UK Government sets goals for AI safety summit
The UK government has unveiled its goals for the upcoming AI Safety Summit set for November 1st and 2nd at Bletchley Park. Secretary of State Michelle Donelan is initiating formal engagement for the summit, with representatives beginning discussions with countries and AI organizations. The summit aims to address risks posed by powerful AI systems and explore their potential benefits, including enhancing biosecurity and improving people’s lives with AI-driven medical technology and safer transport.