Tech
Hyundai, Kia to fix millions of vehicles in anti-theft settlement
Hyundai and Kia have agreed to provide free repairs for millions of vehicles under a nationwide settlement addressing a security weakness that left the cars vulnerable to theft.
The agreement, announced Tuesday by Minnesota Attorney General Keith Ellison, requires the automakers to repair all eligible vehicles at no cost to owners, a program that could exceed $500 million. The companies must also ensure that all future vehicles sold in the United States are equipped with engine immobilizers — a key anti-theft device — and pay up to $4.5 million in restitution to consumers whose vehicles were damaged by thieves.
The settlement involves 35 states, including California, New York, New Jersey and Pennsylvania. Vehicles eligible for the fix were sold between 2011 and 2022, with an estimated 9 million affected nationwide.
The case followed a sharp rise in Hyundai and Kia thefts after videos circulating on TikTok and other social media platforms beginning in 2021 showed how certain models could be stolen using simple tools such as a screwdriver and a USB cable. In Minneapolis alone, thefts of the two brands jumped by more than 800% from 2021 to 2022, prompting Ellison to launch an investigation in early 2023.
Ellison said the automakers had installed engine immobilizers on vehicles sold in Canada and Mexico but failed to do so broadly in the U.S., contributing to theft-related crimes, crashes and fatalities.
Under the settlement, Hyundai and Kia will install a zinc sleeve to prevent tampering with the ignition cylinder. Owners will have one year after receiving notice to obtain the repair at authorized dealerships, with fixes expected to be available from early 2026 through early 2027.
Both automakers said the agreement is part of broader efforts to improve vehicle security and support customers.
Source: AP
10 hours ago
Trump issues executive order to limit state AI regulations
US President Donald Trump signed an executive order Thursday aimed at curbing state-level regulations on artificial intelligence, arguing that patchwork rules across the U.S. could slow innovation and let China gain an edge in AI development.
Currently, four states — California, Colorado, Texas, and Utah — have passed laws requiring transparency from companies, limiting certain data collection, and addressing AI risks such as discrimination in hiring, lending, and healthcare decisions. Some states also regulate AI’s use in elections and for nonconsensual content.
Trump’s order directs federal agencies to identify burdensome state AI regulations and pressure states not to adopt new rules, including by threatening to withhold federal funding or challenging the laws in court. It also calls for a federal framework to preempt state AI laws, though it excludes some protections, such as child safety measures and state government AI use.
Critics, including consumer rights groups and civil liberties advocates, say the order benefits big tech by eliminating state oversight. “Big Tech has successfully leveraged those around the president to pass a federal moratorium that aims to wipe out bipartisan AI safeguards,” said Liana Keesing of Issue One. Children’s advocacy groups also warned the order could put younger generations at risk in an AI-driven world.
Legal challenges are expected. Colorado Attorney General Phil Weiser and California state lawmakers have signaled they will sue if the order is enforced, and Connecticut leaders plan to continue advancing state AI regulations. Observers note that the order may overstep presidential authority in preempting state laws.
1 day ago
Militant groups experimenting with AI as risks rise
While the world races to leverage artificial intelligence, militant groups are also exploring the technology, even if their exact objectives remain unclear.
US national security experts and intelligence agencies warn that extremist organizations could use AI to recruit members, produce realistic deepfake content, and enhance cyberattacks.
A user on a pro-Islamic State website last month encouraged supporters to incorporate AI into their operations. “One of the best things about AI is how easy it is to use,” the user wrote in English.
“Some intelligence agencies worry that AI will contribute (to) recruiting,” the user continued. “So make their nightmares into reality.”
Though IS no longer controls territory in Iraq and Syria, the group operates as a decentralized network sharing a violent ideology. Experts say its early recognition of social media’s power for recruitment and disinformation makes its interest in AI unsurprising.
For loosely organized, under-resourced extremist groups—or even a single individual with internet access—AI can mass-produce propaganda or deepfakes, amplifying influence.
“For any adversary, AI really makes it much easier to do things,” said John Laliberte, former NSA vulnerability researcher and CEO of cybersecurity firm ClearVector. “With AI, even a small group that doesn't have a lot of money is still able to make an impact.”
How extremists are using AI
Since programs like ChatGPT became widely available, militant groups have experimented with AI to generate realistic photos and videos. Combined with social media algorithms, such content can attract new recruits, intimidate opponents, and spread propaganda on an unprecedented scale.
Two years ago, extremist groups circulated fabricated images of the Israel-Hamas war showing bloodied, abandoned children in destroyed buildings. The images fueled outrage and polarization while obscuring the actual horrors of the conflict. Similar tactics were used by violent groups in the Middle East and antisemitic organizations abroad.
Following a concert attack in Russia last year that killed nearly 140 people, AI-generated propaganda videos were widely shared online to recruit supporters.
IS has also created deepfake audio of its leaders reciting scripture and used AI to rapidly translate messages into multiple languages, according to SITE Intelligence Group, which monitors extremist activity.
‘Aspirational’ for now
Experts say these groups still lag behind state actors like China, Russia, or Iran and consider advanced uses of AI “aspirational.”
But Marcus Fowler, former CIA agent and CEO of Darktrace Federal, warned that the risks are growing as accessible AI tools expand. Hackers already use synthetic audio and video for phishing, impersonating officials to access sensitive networks. AI can also automate cyberattacks and generate malicious code.
A greater concern is that extremists could use AI to compensate for technical gaps in developing biological or chemical weapons, a risk highlighted in the Department of Homeland Security’s recent Homeland Threat Assessment.
“ISIS got on Twitter early and found ways to use social media to their advantage,” Fowler said. “They are always looking for the next thing to add to their arsenal.”
Efforts to counter the threat
Lawmakers are pushing measures to address these dangers.
Sen. Mark Warner of Virginia, top Democrat on the Senate Intelligence Committee, said AI developers should be able to share information about malicious uses by extremists, hackers, or foreign spies.
“It has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors,” Warner said.
House lawmakers recently learned that IS and al-Qaida have held AI training workshops for their supporters.
Legislation passed by the U.S. House last month requires homeland security officials to assess AI threats from extremist groups annually.
Guarding against AI misuse, Rep. August Pfluger, R-Texas, said, is similar to preparing for conventional attacks.
“Our policies and capabilities must keep pace with the threats of tomorrow,” he said.
2 days ago
Humanoid robots draw attention at Silicon Valley summit amid lingering doubts
Once viewed as an unattractive investment due to high costs and complexity, humanoid robots are again in the spotlight as advances in artificial intelligence revive ambitions to create machines that move and work like humans.
That renewed interest was on display at the Humanoids Summit in Mountain View, where more than 2,000 attendees, including engineers from Disney, Google and numerous startups, gathered to demonstrate emerging technologies and discuss how to speed up development. Summit founder and venture capitalist Modar Alaoui said many researchers now believe humanoid robots, or other physical forms of AI, could eventually become commonplace, though the timeline remains uncertain.
Despite the enthusiasm, skepticism was widespread. Experts cautioned that major technical challenges remain before robots can serve as reliable workers in homes or offices. Cosima du Pasquier, founder of Haptica Robotics, said significant research gaps still need to be addressed, particularly in areas such as touch and dexterity.
China currently leads the sector, backed by government incentives and a national push to build a humanoid robotics ecosystem by 2025, according to McKinsey & Company. Chinese-made robots dominated displays at the summit, while U.S. firms are benefiting from advances in generative AI that help robots better understand and navigate their environments.
Even so, veteran roboticists warn that fully capable humanoid robots are still a distant goal, with questions remaining over whether current investments will deliver the promised breakthroughs.
Source: AP
3 days ago
South Africa relaxes affirmative action rules, clearing path for Starlink after Musk criticism
South Africa has adjusted its communications licensing rules to allow Elon Musk’s Starlink and other foreign-owned satellite internet companies to operate without transferring a 30% ownership stake to Black or other non-white South Africans.
Under the revised policy, announced Friday by Communications Minister Solly Malatsi, foreign firms seeking licenses in the communications sector can meet affirmative action requirements through alternative “equity equivalent” measures. These may include investments in skills development, training programs, or other initiatives designed to support historically disadvantaged communities, rather than direct shareholding.
Similar provisions already exist for foreign companies operating in other industries across South Africa.
Musk, who was born in South Africa, has previously criticized the country’s ownership rules, calling them “openly racist.” Earlier this year, he claimed on social media that Starlink was barred from operating in the country because he is not Black. U.S. President Donald Trump has also condemned South Africa’s affirmative action framework, portraying it as discriminatory against white people.
The regulations stem from South Africa’s Broad-Based Black Economic Empowerment policy, a key post-apartheid initiative intended to address decades of racial inequality under white minority rule. While the policy remains central to the government’s transformation agenda, critics argue it discourages foreign investment.
Starlink, a subsidiary of SpaceX, already provides low-Earth orbit satellite internet services in more than a dozen African nations, including several that border South Africa.
Minister Malatsi said the updated policy could help expand fast and reliable internet access, particularly in rural and underserved parts of the country, where connectivity remains limited.
4 days ago
Australia implements world-first social media ban for children under 16
Australia on Wednesday launched a landmark social media ban for children under 16, with Prime Minister Anthony Albanese hailing it as a step to give families control over tech giants and protect young users.
The new law affects major platforms including Facebook, Instagram, TikTok, Snapchat, X, YouTube, Reddit, Threads, Kick, and Twitch. Companies face fines of up to 49.5 million Australian dollars ($32.9 million) if they fail to remove accounts of underage users. The ban is enforced by eSafety Commissioner Julie Inman Grant, who said platforms already have the data and technology to comply. Notices will be sent to the companies Thursday, and preliminary compliance results will be reported by Christmas.
The measure has drawn mixed reactions. Many children posted farewell messages, while some tried to bypass age restrictions using face-altering tricks or VPNs. Communications Minister Anika Wells warned that attempts to evade detection would eventually fail, as platforms are required to routinely monitor accounts.
Albanese acknowledged implementation challenges, saying the law “won’t be perfect” but emphasized social responsibility for tech firms. Supporters cited online dangers as a key motivation, including the death of Mac Holdsworth, a sextortion victim, which inspired his father to advocate for age restrictions.
Young advocates like 12-year-old Flossie Brodribb praised the ban for promoting safer, healthier childhoods, while some families in the entertainment industry raised concerns about its impact on social media-based careers.
Privacy safeguards are included in the law. Platforms may use existing data, age-estimation technology, or third-party verification, but cannot compel users to submit government ID or use collected information for secondary purposes without consent, according to Privacy Commissioner Carly Kind.
Albanese and reform supporters framed the ban as a global example, signaling that Australia’s approach could inspire similar measures worldwide.
6 days ago
Social media ban for children under 16 starts in Australia
Australia has implemented a world-first law banning children under 16 from accessing social media platforms, a move Prime Minister Anthony Albanese described as empowering families and curbing the influence of tech giants.
The ban, effective Wednesday, affects platforms including Facebook, Instagram, TikTok, Snapchat, Reddit, YouTube, X, Threads, Kick, and Twitch. Companies failing to comply face fines of up to 49.5 million Australian dollars ($32.9 million). Parents reported some children were upset upon being locked out, and a few attempted to bypass age restrictions using virtual private networks or facial modifications.
The law will be enforced by eSafety Commissioner Julie Inman Grant, who said platforms already have the data and technology to implement the rules. Notices will be sent Thursday requiring details on account closures and age verification, with public updates expected before Christmas.
Albanese acknowledged the rollout would be challenging and “won’t be perfect,” emphasizing social responsibility for tech companies. Communications Minister Anika Wells said over 200,000 TikTok accounts had already been deactivated, warning children trying to evade detection would eventually be caught.
Advocates hailed the move as a vital step for child safety online. Wayne Holdsworth, whose son died in an online sextortion scam, called the law “a start” to protect children. Twelve-year-old Flossie Brodribb said the ban would help kids grow up “healthier, safer and more connected to the real world.”
Some families, however, warned of financial impacts. Simone Clements said the law affects her 15-year-old twins, who rely on social media for their careers as actors, models, and influencers.
Source: AP
7 days ago
Google facing new EU antitrust probe over content used for AI
Google is facing fresh antitrust scrutiny in Europe as EU regulators on Tuesday opened a new investigation into the company’s use of online content to develop its artificial intelligence models and services.
The European Commission, the bloc’s top competition watchdog, is examining whether Google violated EU rules by using content from web publishers and YouTube uploads for AI purposes without compensating creators or allowing them to opt out. Regulators are particularly concerned about two services — AI Overviews, which produces automated summaries at the top of search results, and AI Mode, which provides chatbot-style responses.
The probe will also assess whether Google uses YouTube videos under similar terms to train its generative AI models while restricting access for rival developers.
Officials said they aim to determine whether Google gave itself an unfair competitive edge through restrictive conditions or privileged access to content.
Google said the investigation “risks stifling innovation” and vowed to continue working with news and creative industries as they transition into the AI era.
The investigation falls under the EU’s traditional competition rules, not the newer Digital Markets Act designed to curb Big Tech dominance.
EU competition chief Teresa Ribera said AI innovation must not undermine core societal principles.
Last week, the Commission launched a separate antitrust probe into WhatsApp’s AI policy and fined Elon Musk's platform X €120 million for digital rule violations, prompting criticism from Trump administration officials.
The EU is “agnostic” about company nationality and focuses solely on potential anti-competitive behavior, spokeswoman Arianna Podesta said.
Google will be able to respond to the concerns, and U.S. authorities have been notified. The case has no deadline and could lead to fines of up to 10% of Google’s global annual revenue.
Source: AP
7 days ago
Microsoft to invest $17.5 billion in India for AI and Cloud infrastructure
Microsoft on Tuesday announced its largest-ever investment in Asia, pledging $17.5 billion over the next four years to expand India’s cloud computing and artificial intelligence infrastructure.
CEO Satya Nadella revealed the plan on X following a meeting with Indian Prime Minister Narendra Modi in New Delhi. He said the investment aims to help India develop “infrastructure, skills, and sovereign capabilities” to support its AI ambitions.
The announcement highlights intensifying global competition among tech giants in India, one of the world’s fastest-growing digital markets. In October, Google committed $15 billion to establish its first AI hub in Visakhapatnam.
Nadella’s three-day India visit includes policy discussions and participation in AI-focused events in Bengaluru and Mumbai. The government has set ambitious targets to become a global AI and semiconductor hub, offering incentives to attract multinational technology firms.
Microsoft, which has been in India for over three decades and employs more than 22,000 people, plans to scale up cloud and data center operations nationwide, including a new hyperscale data center expected to go live by mid-2026.
Source: AP
7 days ago