Tech-News
OpenAI faces US criminal probe over alleged ChatGPT link to shooting
OpenAI is facing a criminal investigation in the United States over whether its chatbot ChatGPT played a role in a deadly mass shooting at Florida State University last year.
Florida Attorney General James Uthmeier said Tuesday that his office has been examining how the suspected gunman used the AI tool before the attack in Tallahassee.
"Our review has revealed that a criminal investigation is necessary," Uthmeier said. "ChatGPT offered significant advice to this shooter before he committed such heinous crimes."
OpenAI rejected the allegation, saying: "ChatGPT is not responsible for this terrible crime."
The case is believed to be the first time the company has faced a criminal probe over alleged misuse of its chatbot in connection with a violent crime.
An OpenAI spokesperson said the company has been cooperating with investigators and had “proactively shared” information about a ChatGPT account believed to be linked to the suspect.
The suspect, identified as 20-year-old student Phoenix Ikner, is currently in custody awaiting trial. According to OpenAI, the chatbot “did not encourage or promote illegal or harmful activity.”
"In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet," the spokesperson added.
However, Uthmeier alleged that ChatGPT advised the suspect on weapons and ammunition, and even suggested when and where on campus large numbers of people could be found.
"My prosecutors have looked at this, and they told me that if it was a person on the other end of that screen, we would be charging them with murder," he said.
He noted that under Florida law, anyone who “aids, abets or counsels” a crime can be treated as a principal offender, adding that authorities are now assessing potential “criminal culpability” for OpenAI.
OpenAI, co-founded by Sam Altman, rose to global prominence after launching ChatGPT in 2022, which has since become one of the most widely used AI tools.
The company is already facing legal challenges over another incident in British Columbia, where a separate shooting earlier this year raised concerns about the misuse of AI tools. OpenAI said it had identified and banned the suspect’s account after that incident and plans to strengthen safety measures.
The parents of a girl injured in that attack have filed a lawsuit against the company.
Concerns over AI misuse have also drawn attention from regulators. Last year, a coalition of 42 state attorneys general wrote to major tech firms including Google, Meta and Anthropic, urging stronger safeguards.
The letter warned of increasing risks as more people use AI tools without fully understanding potential dangers, citing a growing number of serious incidents across the country linked to AI use.
Source: BBC
1 hour ago
Musk faces French questioning over X’s alleged role in illegal content spread
Elon Musk has been summoned to Paris for questioning as French investigators examine alleged misconduct linked to the social media platform X, including the spread of child sexual abuse material and deepfake content.
Musk and former X CEO Linda Yaccarino have been called for “voluntary interviews,” while other employees are expected to testify as witnesses this week, the Paris prosecutor’s office said.
It is not yet clear whether Musk or Yaccarino will attend. X did not respond to queries from The Associated Press, and Yaccarino’s current company, eMed, also did not comment.
Prosecutors are also looking into claims that the controversy around X’s AI system Grok and its deepfake content may have been deliberately stoked to boost the value of Musk-owned companies ahead of a planned market listing. French authorities have shared their concerns with US regulators.
The investigation follows a search conducted in February at X’s offices in France, part of a probe launched in January 2025 by the Paris cybercrime unit. Musk and Yaccarino were summoned in their roles as company leaders during the period under review.
Prosecutors said the interviews are meant to allow executives to explain their position and outline steps to comply with French law. They added the inquiry aims to ensure X follows national regulations while operating in France.
Authorities declined to say whether Musk would face penalties if he does not appear.
The probe began after a French lawmaker raised concerns that X’s algorithms could be biased and distort automated data systems. It later expanded after Grok generated controversial posts, including content denying the Holocaust and producing sexually explicit deepfakes.
Investigators are examining possible involvement in distributing illegal images of minors, creating and spreading explicit deepfakes, denying crimes against humanity, and manipulating automated systems as part of an organized effort.
Grok, developed by xAI and integrated into X, drew global criticism after producing large amounts of non-consensual deepfake content. In one widely shared post, it incorrectly suggested gas chambers at Auschwitz were used for disinfection rather than mass killing — a claim linked to Holocaust denial. The chatbot later corrected itself, acknowledging the historical facts.
In March, French prosecutors alerted the U.S. Department of Justice and the Securities and Exchange Commission, suggesting the controversy may have been deliberately created to inflate the value of X and xAI ahead of a planned June 2026 stock market listing tied to a merger involving SpaceX.
However, according to The Wall Street Journal, the Justice Department declined to assist French investigators, saying the request could amount to interference in an American company’s activities.
Separately, Reporters Without Borders said it has filed a new complaint against X, accusing the platform of allowing disinformation to spread.
The group said misleading content continues to gain wide attention on X despite repeated requests for removal, adding that the platform’s response has been inadequate and undermines the public’s right to reliable information.
1 day ago
Humanoid robot beats human half-marathon record in Beijing race
A humanoid robot named “Flash”, developed by Shenzhen Honor Smart Technology Development Co., Ltd., won the 2026 Beijing E-Town Half-Marathon on Sunday, completing the race in 50 minutes and 26 seconds using fully autonomous navigation.
Its performance surpassed the human half-marathon world record of 57 minutes and 20 seconds.
The current human record was set last month by Uganda’s long-distance runner Jacob Kiplimo at the Lisbon Half Marathon in Portugal.
This year’s race marked a significant advancement from the inaugural 2025 edition, when the robot Tiangong Ultra finished in 2 hours, 40 minutes, and 42 seconds, and only six of 20 teams completed the full 21.0975-kilometer course.
In the 2026 edition, more than 100 teams participated, including entries from Germany, France, and Brazil. The event showcased major improvements in humanoid robotics, with robots demonstrating greater speed, balance, and stability.
To ensure safety, robots and human runners followed the same course but ran in separate lanes. The robot category had a cutoff time of 3 hours and 40 minutes. Participants could compete either through remote control or autonomous navigation, with around 40 percent choosing full autonomy. Results for remotely controlled robots were adjusted using a 1.2 coefficient to encourage the development of independent navigation technology.
According to Liang Liang, deputy secretary-general of the Chinese Institute of Electronics, the scoring system is designed to promote autonomous navigation capabilities, which are seen as essential for future real-world applications of humanoid robots.
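The adjusted-scoring rule can be sketched as follows. The article does not specify exactly how the 1.2 coefficient is applied, so this sketch assumes it multiplies a remote-controlled robot’s finishing time; the function name and constant are illustrative, not taken from the race rules.

```python
# Hypothetical sketch of the race's scoring adjustment: remotely
# controlled robots have their finishing time multiplied by a 1.2
# coefficient, so an equally fast autonomous run always scores better.
REMOTE_CONTROL_COEFFICIENT = 1.2

def adjusted_time(finish_seconds: float, autonomous: bool) -> float:
    """Return the scoring time: raw time if autonomous, penalized otherwise."""
    if autonomous:
        return finish_seconds
    return finish_seconds * REMOTE_CONTROL_COEFFICIENT

# "Flash" finished in 50 min 26 s (3026 s) under full autonomy.
flash_score = adjusted_time(50 * 60 + 26, autonomous=True)    # no penalty
# A remote-controlled robot with the same raw time scores 20% worse.
remote_score = adjusted_time(50 * 60 + 26, autonomous=False)
print(flash_score, remote_score)
```

Under this reading, the coefficient gives autonomous entries a built-in 20 percent advantage on paper, which matches the stated goal of promoting independent navigation.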
In the human category, China’s Zhao Haijie won the men’s race in 1 hour, 7 minutes, and 47 seconds, while compatriot Wang Qiaoxia took the women’s title in 1 hour, 18 minutes, and 6 seconds.
2 days ago
Google uses AI to combat surge in AI-driven scams and spam
Artificial intelligence has become a major enabler for online spammers and scammers, but tech giant Google is increasingly using the same technology to counter the threat.
From fake advertisements promoting miracle herbal cures to AI-generated videos using celebrity-like voices, users are frequently exposed to sophisticated spam and scam content online—much of it created with generative AI.
Experts say the rise of accessible AI tools has worsened a long-standing internet problem. “It’s not that this is a new problem. It is an old problem, supercharged,” said Nate Elliott, principal analyst at Emarketer, adding that AI has dramatically increased the speed and scale of operations for legitimate users and criminals alike.
According to the FBI’s Internet Crime Report, more than 22,000 complaints involving AI-related scams were recorded last year, with losses exceeding $893 million.
In its annual ads safety report, Google said its AI systems are playing a key role in tackling the issue. The company said its generative AI tool Gemini blocked over 99% of policy-violating ads before they reached users.
In 2025, Google removed or blocked over 8.3 billion ads, including 602 million linked to scams, while suspending about 24.9 million advertiser accounts, more than 4 million of them for scam-related activity.
Google, which earned over $200 billion in global ad revenue last year, said thousands of employees support its advertising safety systems. Company executive Keerat Sharma said Gemini now helps analyse hundreds of billions of signals, including user behaviour and campaign patterns, to detect malicious intent more accurately while reducing wrongful suspensions by 80%.
Sharma added that AI has also improved speed, allowing ad analysis within milliseconds. Experts, however, believe the battle between AI-driven scams and AI-based defences will continue, with University of Wisconsin-Madison’s Matt Seitz saying the problem is now too large for humans alone to manage.
2 days ago
Data center growth faces setback as Maine approves freeze
Lawmakers in the US state of Maine have approved a bill to impose what would be the nation’s first statewide moratorium on large, energy-intensive data centers, reflecting growing political resistance over concerns about power consumption, water use and electricity costs.
The Democratic-controlled Legislature on Tuesday passed the measure and sent it to Governor Janet Mills, who is running for the US Senate. The proposed law would halt development of large-scale data centers for more than a year and establish a special council to help local authorities assess future projects.
Although Maine is not a major hub for hyperscale data centers, recent proposals triggered strong local opposition, accelerating the bill’s passage. The move highlights rising resistance to such facilities, even as they receive support from the administration of President Donald Trump and various state leaders who view them as vital for economic growth and competition in artificial intelligence.
Supporters of the moratorium argue that the benefits of data centers have not been proven in terms of electricity costs, water usage or local economic gains. However, industry representatives warn the measure could discourage investment, limit job creation and hinder workforce development.
Community groups backing the legislation say it is intended to ensure greater public input and transparency in decision-making.
Similar moratorium proposals have been introduced in several US states, though none had previously cleared a legislative chamber.
6 days ago
Amazon to invest $11.5bn in satellite firm to boost Starlink rivalry
Amazon has announced plans to spend about $11.57 billion to acquire Globalstar, aiming to expand its satellite business and compete more strongly in the growing space-based internet market.
The deal, revealed on Tuesday, will help Amazon accelerate its long-running low-Earth-orbit satellite initiative, known as Project Leo, by deploying thousands of satellites to support internet and mobile services.
Amazon said the acquisition aligns with its long-term plan to strengthen space-based connectivity and build a next-generation satellite network, which is expected to be operational by 2028.
The move will intensify competition with Starlink, launched in 2019 by Elon Musk. Starlink currently has a major lead, with more than 10,000 active satellites serving over 10 million users worldwide, while Amazon’s network has only about 200 satellites in orbit.
Starlink operates under SpaceX and is considered a key revenue source for the firm. SpaceX is also preparing for a potential public listing later this year, with its valuation expected to surpass $1 trillion.
Even after adding Globalstar’s existing network of around 50 satellites, Amazon will need to significantly scale up production to meet its target of thousands of satellites by 2028.
Amazon CEO Andy Jassy recently said the company has already secured agreements with several major organisations, including Delta Air Lines, JetBlue, AT&T, Vodafone, DIRECTV Latin America, Australia’s National Broadband Network, and NASA, to use its satellite services once the system is fully operational.
As part of the deal, Amazon will take control of Globalstar’s infrastructure across multiple locations, including the United States, Ireland, Brazil and France.
Founded in 1991, Globalstar provides satellite communication services and has been working with Apple since 2022 to offer emergency “SOS” connectivity on iPhones and Apple Watches. Apple acquired a 20% stake in the company in 2024.
Amazon said it has reached an agreement with Apple to continue providing the emergency satellite feature on its devices.
Amazon is offering Globalstar investors $90 per share in cash or equivalent Amazon stock under the takeover deal.
Meanwhile, Blue Origin, founded by Jeff Bezos, is also entering the satellite internet market. Its project, TerraWave, aims to launch at least 5,400 satellites by 2027 to provide connectivity services to large businesses.
Source: BBC
7 days ago
South Korea ICT exports surge to record level in March
South Korea’s information and communications technology (ICT) exports reached a record high in March, driven by strong global demand for semiconductors, according to government data released on Tuesday.
The country’s ICT product shipments surged 112 percent year-on-year to 43.51 billion U.S. dollars in March, crossing the 40-billion-dollar mark for the first time, the Ministry of Trade, Industry and Resources said. It marked the 14th consecutive month of growth since February 2025.
Semiconductor exports soared 151.4 percent to 32.84 billion dollars, exceeding the 30-billion-dollar threshold for the first time, fuelled by global investment in artificial intelligence (AI) infrastructure that boosted demand and prices for memory chips.
However, display panel exports declined 9.3 percent to 1.49 billion dollars. In contrast, mobile phone exports jumped 57 percent to 1.54 billion dollars due to strong demand for new models.
Exports of computers and peripherals surged 174.1 percent to 3.59 billion dollars, while communications equipment shipments fell 5.8 percent to 210 million dollars.
On the import side, ICT products rose 32.2 percent year-on-year to 16.15 billion dollars in March. Imports of chips, mobile phones and computers recorded double-digit growth, while display panels and communications equipment saw single-digit increases.
As a result, South Korea’s ICT sector posted a trade surplus of 27.36 billion dollars for the month.
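The surplus figure is simply exports minus imports for the month, and the reported numbers check out:

```python
# Verify the reported ICT trade surplus: surplus = exports - imports.
exports_bn = 43.51   # March ICT exports, billions of US dollars
imports_bn = 16.15   # March ICT imports, billions of US dollars
surplus_bn = round(exports_bn - imports_bn, 2)
print(surplus_bn)    # 27.36, matching the reported surplus
```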
7 days ago
AI use rises at workplaces, but many employees remain hesitant
The use of artificial intelligence (AI) at workplaces in the United States is growing, but many employees are still reluctant to rely on the technology, according to a new Gallup poll.
The survey shows that while more workers are now using AI tools regularly, concerns are also increasing about the risk of job losses. Many employees who avoid AI say they prefer traditional methods, have ethical concerns or are worried about data privacy.
The poll, conducted in February, highlights a mixed picture of how AI is changing workplaces. Some workers see it as a powerful tool that improves productivity and efficiency, while others fear its negative impact.
Scott Segal, a social worker in northern Virginia, said he uses AI to gather information to help elderly and vulnerable patients access healthcare services. However, he also fears that AI could eventually replace his role.
“I think people in jobs that can be replaced should start planning ahead,” said the 53-year-old.
The poll found that about 30% of employees use AI frequently, either daily or several times a week, while around 20% use it occasionally.
Around 40% of workers said their organisations have introduced AI tools to improve operations. Among them, nearly two-thirds reported that AI has had a positive impact on their productivity and efficiency.
Managers appear to benefit more from AI than other employees. About 70% of leaders who use AI at least a few times a year said it has improved their efficiency, compared to just over half of other workers.
Among employees who have access to AI but choose not to use it, 46% said they prefer to continue working in their usual way. Around 40% cited ethical concerns, data privacy issues, or doubts about AI’s usefulness. About a quarter said they had tried AI but found it unhelpful, while roughly 20% felt they lacked the skills to use it properly.
Thuy Pisone, a contract administrator in Maryland, said she uses AI for simple tasks but avoids it for work she can already handle confidently, such as preparing presentations.
The survey also found growing concern about job security. About 18% of US workers believe their jobs could be replaced by technology, automation or AI within the next five years, up from 15% in 2025.
Workers at companies already using AI are even more worried, with 23% saying job loss is at least somewhat likely in the near future.
Despite these concerns, most workers are not overly anxious. Around 70% said they are not very concerned or not concerned at all about losing their jobs to AI.
Segal said he is considering starting a healthcare support service if AI replaces his current role, as he believes some human-centred services will take longer to be automated.
For now, he said, he is even using AI tools to plan his financial future, including retirement savings.
9 days ago
China uses AI, social media to reshape global narrative, mock US
BEIJING, Apr 11 (AP/UNB) - China’s Communist Party, once known for rigid messaging, is increasingly using artificial intelligence and social media to shape global narratives, often targeting the United States and its leadership.
After tightly controlling the domestic internet through censorship, Beijing is now using AI-generated content to project its views abroad and counter what it calls Western bias.
In a recent example, Chinese state media released a five-minute AI-generated animation in a martial arts style depicting an allegory of a war in Iran. It shows a white eagle in royal attire representing the U.S., unleashing an evil laugh before its forces attack Persian cats symbolising Iranians, who vow to fight back after losing their leader and closing a key trade route.
The video, rich in metaphor, is part of a series of AI animations mocking the U.S., including references to President Donald Trump’s comments on Greenland and U.S. dominance.
The trend reflects President Xi Jinping’s push to expand China’s global media influence and counter Western narratives. Similar AI-generated content has also been used by pro-Iran groups against the U.S.
Analysts say it reflects an intensifying global information war. A U.S. State Department cable warned such foreign campaigns pose a “direct threat” to national security.
Experts say AI “infotainment” is appealing to younger audiences globally. The video, released by China Central Television, went viral domestically and gained over one million views after being subtitled on X.
China has also built a vast social media “matrix” of diplomats, media and bots to amplify its messaging worldwide.
10 days ago
Russia’s tightening internet controls spark rising public anger
On a sunny weekend in central Moscow, dozens of people queued outside a presidential administration building, as police watched closely. They came to voice complaints over the government’s increasing restrictions on the internet, which have included frequent cellphone internet shutdowns, blocked messaging apps, and limited access to thousands of websites and digital services.
The moves have stirred growing frustration among Russians, affecting daily life, harming businesses, and drawing criticism even from some Kremlin supporters. Knowing that unauthorized protests are harshly suppressed, activists have focused on authorized rallies, putting up posters, and filing lawsuits, while business leaders have urged authorities to ease the measures.
Even Armenia’s Prime Minister Nikol Pashinyan took a subtle jab at Russia during a televised meeting with President Vladimir Putin on April 1, noting that social media in Armenia “is 100% free” without restrictions, prompting an unsmiling reaction from Putin.
The internet clampdown has disrupted digital life, making tasks like ordering taxis, paying bills, and staying in touch with family and friends difficult. Kremlin critic Boris Nadezhdin told AP, “This infuriates a huge number of people.”
A push for full control
Russia has long sought total control over the internet, blocking tens of thousands of websites, messaging apps, and social media platforms that refuse to cooperate. While users have turned to virtual private networks (VPNs) to bypass restrictions, authorities have also blocked many of these tools.
Last year, shutdowns escalated to include cellphone internet and sometimes broadband, leaving only government-approved sites and apps accessible. Officials claim the measures target Ukrainian drones using Russian networks during the ongoing invasion, but ordinary citizens and businesses in areas unaffected by drones see them as harmful.
WhatsApp and Telegram, the country’s two most popular messaging apps, have faced repeated blocks, while a government-backed app, MAX, widely viewed as a surveillance tool, is being promoted. Voice and video calls were blocked first, followed by messaging, which now often requires a VPN.
Lawyer Sarkis Darbinyan of digital rights group RKS Global said the government aims to confine users to a “digital ghetto” of Russian-controlled apps, adding, “The internet is no longer this universal digital good.”
Business voices concerns
Business leaders have called for moderation, highlighting the impact on daily life and commerce. Alexander Shokhin, head of the Russian Union of Industrialists and Entrepreneurs, told Putin that cellphone internet shutdowns “made life difficult for both businesses and citizens.” CEOs of major telecom operators also suggested targeted restrictions on suspicious users instead of broad shutdowns.
IT entrepreneur Natalya Kasperskaya criticized the blocking of VPNs for causing weekend outages in banking and other services, calling for dialogue between authorities and the IT sector.
Cautious activism
Activists across Russia have attempted rallies since late February, seeking authorization under strict protest laws. Many applications were rejected, and some organizers were arrested, but small pickets and poster campaigns have taken place.
Nadezhdin and other groups have applied to hold rallies on April 12, Cosmonautics Day, highlighting the link between science, technology, progress, and internet connectivity. “Public frustration is enormous,” he said, noting that people are willing to join authorized protests.
Moscow-based opposition politician Yulia Galyamina echoed the sentiment, saying public discontent over internet restrictions, especially Telegram, “is truly widespread” and growing.
12 days ago