
Big Tech

Meta offers EU users ad-light option in push to end investigation

Summary: Meta has agreed to make changes to its “pay or consent” business model in the EU, seeking a deal that avoids further regulatory fines at a time when the bloc’s digital rule book is drawing anger from US authorities. On Tuesday, the European Commission announced that the social media giant had offered users an alternative version of Facebook and Instagram that would show them fewer personalized advertisements. The offer follows an EU investigation into Meta’s policy of requiring users either to consent to data tracking or pay for an ad-free service. The Financial Times reported on optimism that an agreement could be reached between the parties in October. (Source: Ars Technica)

Google’s Sundar Pichai warns of “irrationality” in trillion-dollar AI investment boom

Summary: On Tuesday, Alphabet CEO Sundar Pichai warned of “irrationality” in the AI market, telling the BBC in an interview, “I think no company is going to be immune, including us.” His comments arrive as scrutiny of the AI market has reached new heights, with Alphabet shares doubling in value over seven months to reach a $3.5 trillion market capitalization. Speaking exclusively to the BBC at Google’s California headquarters, Pichai acknowledged that while AI investment growth is at an “extraordinary moment,” the industry can “overshoot” in investment cycles, as it is doing now. He drew comparisons to the late-1990s Internet boom, which saw early Internet company valuations surge before collapsing in 2000, leading to bankruptcies and job losses. “We can look back at the Internet right now. There was clearly a lot of excess investment, but none of us would question whether the Internet was profound,” Pichai said. “I expect AI to be the same. So I think it’s both rational and there are elements of irrationality through a moment like this.”

Oracle hit hard in Wall Street’s tech sell-off over its huge AI bet

Summary: Oracle has been hit harder than its Big Tech rivals in the recent sell-off of tech stocks and bonds, as its vast borrowing to fund a pivot to artificial intelligence unnerved Wall Street. The US software group founded by Larry Ellison has made a dramatic entrance to the AI race, committing to spend hundreds of billions of dollars over the next few years on chips and data centers, largely as part of deals to supply computing capacity to OpenAI, the maker of ChatGPT. The speed and scale of its moves have unsettled some investors at a time when markets are keenly focused on the spending of so-called hyperscalers, the big tech companies building vast data centers.

Forget AGI—Sam Altman celebrates ChatGPT finally following em dash formatting rules

Summary: Em dashes have become what many believe to be a telltale sign of AI-generated text over the past few years. The punctuation mark appears frequently in outputs from ChatGPT and other AI chatbots, sometimes to the point where readers believe they can identify AI writing by its overuse alone (although people can overuse it, too). On Thursday evening, OpenAI CEO Sam Altman posted on X that ChatGPT has started following custom instructions to avoid using em dashes. “Small-but-happy win: If you tell ChatGPT not to use em-dashes in your custom instructions, it finally does what it’s supposed to do!” he wrote. The post, which came two days after the release of OpenAI’s new GPT-5.1 AI model, received mixed reactions from users who have struggled for years to get the chatbot to follow specific formatting preferences. And this “small win” raises a bigger question: if the world’s most valuable AI company has struggled for years to control something as simple as punctuation, perhaps what people call artificial general intelligence (AGI) is farther off than some in the industry claim.

Researchers question Anthropic claim that AI-assisted attack was 90% autonomous

Summary: Researchers from Anthropic said they recently observed the “first reported AI-orchestrated cyber espionage campaign” after detecting China-state hackers using the company’s Claude AI tool in a campaign against dozens of targets. Outside researchers are much more measured in describing the significance of the discovery. Anthropic published two reports on Thursday. The reports said that in September, Anthropic discovered a “highly sophisticated espionage campaign,” carried out by a Chinese state-sponsored group, that used Claude Code to automate up to 90 percent of the work, with human intervention required “only sporadically (perhaps 4-6 critical decision points per hacking campaign).” Anthropic said the hackers had employed AI agentic capabilities to an “unprecedented” extent. “This campaign has substantial implications for cybersecurity in the age of AI ‘agents’—systems that can be run autonomously for long periods of time and that complete complex tasks largely independent of human intervention,” Anthropic said. “Agents are valuable for everyday work and productivity—but in the wrong hands, they can substantially increase the viability of large-scale cyberattacks.”

Meta’s star AI scientist Yann LeCun plans to leave for own startup

Summary: Meta’s chief AI scientist and Turing Award winner Yann LeCun plans to leave the company to launch his own startup focused on a different type of AI called “world models,” the Financial Times reported. The French-US scientist has reportedly told associates he will depart in the coming months and is already in early talks to raise funds for the new venture. The departure comes after CEO Mark Zuckerberg radically overhauled Meta’s AI operations, having decided the company had fallen behind rivals such as OpenAI and Google. World models are hypothetical AI systems that some AI engineers expect to develop an internal “understanding” of the physical world by learning from video and spatial data rather than text alone. Unlike current large language models (such as the kind that power ChatGPT), which predict the next segment of data in a sequence, world models would ideally simulate cause-and-effect scenarios, understand physics, and enable machines to reason and plan more like animals do. LeCun has said this architecture could take a decade to fully develop. While some AI experts believe that Transformer-based AI models, such as large language models, video synthesis models, and interactive world synthesis models, have emergently modeled physics or absorbed the structural rules of the physical world from training data, the evidence so far generally points to sophisticated pattern-matching rather than a base understanding of how the physical world actually works.

ClickFix may be the biggest security threat your family has never heard of

Summary: Over the past year, scammers have ramped up a new way to infect the computers of unsuspecting people. The increasingly common method, which many potential targets have yet to learn of, is quick, bypasses most endpoint protections, and works against both macOS and Windows users. ClickFix often starts with an email sent from a hotel with which the target has a pending reservation, referencing the correct registration information. In other cases, ClickFix attacks begin with a WhatsApp message. In still other cases, the user receives the URL at the top of Google results for a search query. Once the mark accesses the malicious site, it presents a CAPTCHA challenge or other pretext requiring user confirmation. The user is instructed to copy a string of text, open a terminal window, paste it in, and press Enter.

One line is all it takes

Once entered, the string of text causes the PC or Mac to surreptitiously visit a scammer-controlled server and download malware, which the machine then installs automatically, all with no indication to the target. With that, users are infected, usually with credential-stealing malware. Security firms say ClickFix campaigns have run rampant. Lack of awareness of the technique, links arriving from known addresses or appearing in search results, and the ability to bypass some endpoint protections are all factors driving its growth.

Researchers isolate memorization from reasoning in AI neural networks

Summary: When engineers build AI language models like GPT-5 from training data, at least two major capabilities emerge: memorization (reciting exact text the model has seen before, like famous quotes or passages from books) and reasoning (solving new problems using general principles). New research from AI startup Goodfire.ai provides the first potentially clear evidence that these functions work through largely separate neural pathways in a model’s architecture, and the researchers found the separation to be remarkably clean. In a preprint paper released in late October, they reported that when they removed the memorization pathways, models lost 97 percent of their ability to recite training data verbatim but kept nearly all of their “logical reasoning” ability intact. For example, at layer 22 of the Allen Institute for AI’s OLMo-7B language model, the bottom 50 percent of weight components showed 23 percent higher activation on memorized data, while the top 10 percent showed 26 percent higher activation on general, non-memorized text. This mechanistic split enabled the researchers to surgically remove memorization while preserving other capabilities.
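The ablation idea can be caricatured in a toy sketch: score each weight component by whether it activates more on memorized or general text, drop the memorization-leaning ones, and compare how much of each “capability” survives. The eight-component layer and all the numbers below are invented for illustration; they are not taken from the paper, and real models require far subtler statistics.

```python
# Toy "layer" of eight weight components. Components 0-3 activate more on
# memorized text; components 4-7 activate more on general text.
components = [
    {"id": i,
     "act_memorized": 1.2 if i < 4 else 0.4,
     "act_general":   0.4 if i < 4 else 1.2}
    for i in range(8)
]

def ablate_memorization(comps):
    """Drop components whose activation is higher on memorized text."""
    return [c for c in comps if c["act_general"] >= c["act_memorized"]]

def capability(comps, key):
    """Stand-in metric: total activation of the surviving components."""
    return sum(c[key] for c in comps)

pruned = ablate_memorization(components)

recite_kept = capability(pruned, "act_memorized") / capability(components, "act_memorized")
reason_kept = capability(pruned, "act_general") / capability(components, "act_general")

print(f"recitation retained: {recite_kept:.0%}")
print(f"reasoning retained:  {reason_kept:.0%}")
```

In this cartoon, pruning removes most of the recitation score while leaving most of the reasoning score intact, mirroring the asymmetry the researchers report (97 percent loss of verbatim recall with reasoning largely preserved).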

Researchers surprised that with AI, toxicity is harder to fake than intelligence

Summary: The next time you encounter an unusually polite reply on social media, you might want to check twice; it could be an AI model trying (and failing) to blend in with the crowd. On Wednesday, researchers from the University of Zurich, University of Amsterdam, Duke University, and New York University released a study showing that AI models remain easily distinguishable from humans in social media conversations, with an overly friendly emotional tone serving as the most persistent giveaway. The research, which tested nine open-weight models across Twitter/X, Bluesky, and Reddit, found that classifiers developed by the researchers detected AI-generated replies with 70 to 80 percent accuracy. The study introduces what the authors call a “computational Turing test” to assess how closely AI models approximate human language. Instead of relying on subjective human judgment about whether text sounds authentic, the framework uses automated classifiers and linguistic analysis to identify specific features that distinguish machine-generated from human-authored content.
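The core idea of a “computational Turing test”, scoring text with an automated classifier over linguistic features instead of asking humans, can be sketched minimally. The marker list, threshold, and single-feature classifier below are invented for illustration; the study trained real classifiers over many linguistic features.

```python
# Invented single-feature "classifier": flag an overly friendly tone,
# the giveaway the study found most persistent.
FRIENDLY_MARKERS = {"great", "wonderful", "absolutely", "happy", "thank"}

def friendliness(text: str) -> float:
    """Fraction of words that are 'overly friendly' markers."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    if not words:
        return 0.0
    return sum(w in FRIENDLY_MARKERS for w in words) / len(words)

def classify(text: str, threshold: float = 0.15) -> str:
    """Score exceeding the threshold counts as a (crude) AI-likeness signal."""
    return "likely-AI" if friendliness(text) > threshold else "likely-human"

print(classify("Absolutely wonderful point, thank you! Happy to help."))
print(classify("idk, seems wrong. source?"))
```

The point of the framework is that the judgment is reproducible and inspectable: one can ask exactly which features pushed a reply over the threshold, which subjective human ratings cannot offer.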

Wipers from Russia’s most cut-throat hackers rain destruction on Ukraine

Summary: One of the world’s most ruthless and advanced hacking groups, the Russian state-controlled Sandworm, launched a series of destructive cyberattacks as part of Russia’s ongoing war against neighboring Ukraine, researchers reported Thursday. In April, the group targeted a Ukrainian university with two wipers, a form of malware that aims to permanently destroy sensitive data and often the infrastructure storing it. One wiper, tracked under the name Sting, targeted fleets of Windows computers by scheduling a task named DavaniGulyashaSdeshka, a phrase derived from Russian slang that loosely translates to “eat some goulash,” researchers from ESET said. The other wiper is tracked as Zerlot.

A not-so-common target

Then, in June and September, Sandworm unleashed multiple wiper variants against a host of Ukrainian critical infrastructure targets, including organizations active in government, energy, and logistics, all of which have long been in the crosshairs of Russian hackers. There was, however, a fourth, less common target: organizations in Ukraine’s grain industry.

Google plans secret AI military outpost on tiny island overrun by crabs

Summary: On Wednesday, Reuters reported that Google is planning to build a large AI data center on Christmas Island, a 52-square-mile Australian territory in the Indian Ocean, following a cloud computing deal with Australia’s military. The previously undisclosed project will reportedly position advanced AI infrastructure a mere 220 miles south of Indonesia, at a location military strategists consider critical for monitoring Chinese naval activity. Aside from its strategic position, the island is famous for its massive annual crab migration, in which over 100 million red crabs make their way across the island to spawn in the ocean. That’s notable because the tech giant has applied for environmental approvals to build a subsea cable connecting the island to Darwin, where US Marines are stationed for six months each year. The project follows a three-year cloud agreement Google signed with Australia’s military in July 2025, but many details about the new facility’s size, cost, and specific capabilities remain “secret,” according to Reuters. Both Google and Australia’s Department of Defense declined to comment when contacted by the news agency.

5 AI-developed malware families analyzed by Google fail to work and are easily detected

Summary: On Wednesday, Google revealed five recent malware samples that were built using generative AI. The end results fell far short of professionally developed malware, a finding that shows vibe coding of malicious wares still lags behind more traditional development and has a long way to go before it poses a real-world threat. One of the samples, tracked under the name PromptLock, was part of an academic study analyzing how effectively large language models can be used “to autonomously plan, adapt, and execute the ransomware attack lifecycle.” The researchers, however, reported that the malware had “clear limitations: it omits persistence, lateral movement, and advanced evasion tactics” and served as little more than a demonstration of the feasibility of AI for such purposes. Prior to the paper’s release, security firm ESET said it had discovered the sample and hailed it as “the first AI-powered ransomware.”

Don’t believe the hype

Like the other four samples Google analyzed (FruitShell, PromptFlux, PromptSteal, and QuietVault), PromptLock was easy to detect, even by less sophisticated endpoint protections that rely on static signatures. All of the samples also employed methods previously seen in malware, making them easy to counteract, and none had any operational impact, meaning they didn’t require defenders to adopt new defenses.

OpenAI signs massive AI compute deal with Amazon

Summary: On Monday, OpenAI announced it has signed a seven-year, $38 billion deal to buy cloud services from Amazon Web Services to power products like ChatGPT and Sora. It’s the company’s first big computing deal since a fundamental restructuring last week gave OpenAI more operational and financial freedom from Microsoft. The agreement gives OpenAI access to hundreds of thousands of Nvidia graphics processors to train and run its AI models. “Scaling frontier AI requires massive, reliable compute,” OpenAI CEO Sam Altman said in a statement. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.” OpenAI will reportedly begin using Amazon Web Services immediately, with all planned capacity set to come online by the end of 2026 and room to expand further in 2027 and beyond. Amazon plans to roll out hundreds of thousands of chips, including Nvidia’s GB200 and GB300 AI accelerators, in data clusters built to power ChatGPT’s responses, generate AI videos, and train OpenAI’s next wave of models.

Two Windows vulnerabilities, one a 0-day, are under active exploitation

Summary: Two Windows vulnerabilities are under active exploitation in widespread attacks targeting a swath of the Internet, researchers say: one a zero-day that has been known to attackers since 2017, the other a critical flaw that Microsoft recently tried and failed to patch. The zero-day went undiscovered until March, when security firm Trend Micro reported it had been under active exploitation since 2017 by as many as 11 separate advanced persistent threats (APTs). These APT groups, often with ties to nation-states, relentlessly attack specific individuals or groups of interest. Trend Micro went on to say that the groups were exploiting the vulnerability, then tracked as ZDI-CAN-25373, to install various known post-exploitation payloads on infrastructure located in nearly 60 countries, with the US, Canada, Russia, and Korea the most common.

A large-scale, coordinated operation

Seven months later, Microsoft still hasn’t patched the vulnerability, which stems from a bug in the Windows Shortcut binary format, a component that makes opening apps or accessing files easier and faster by allowing a single binary file to invoke them without the user having to navigate to their locations. In recent months, the ZDI-CAN-25373 tracking designation has been changed to CVE-2025-9491.

ChatGPT maker reportedly eyes $1 trillion IPO despite major quarterly losses

Summary: On Tuesday, OpenAI CEO Sam Altman told Reuters during a livestream that going public “is the most likely path for us, given the capital needs that we’ll have.” Sources familiar with the matter now say the ChatGPT maker is preparing for an initial public offering that could value the company at up to $1 trillion, with filings possible as early as the second half of 2026. News of the potential IPO comes, however, as the company faces mounting losses that may have reached as much as $11.5 billion in the most recent quarter, according to one estimate. Going public could give OpenAI more efficient access to capital and enable larger acquisitions using public stock, helping finance Altman’s plans to spend trillions of dollars on AI infrastructure, according to people familiar with the company’s thinking who spoke with Reuters. Chief Financial Officer Sarah Friar has reportedly told some associates that the company is targeting a 2027 listing, while some financial advisers predict 2026 could be possible. Three people with knowledge of the plans told Reuters that OpenAI has discussed raising $60 billion at the low end in preliminary talks. That figure refers to how much money the company would raise by selling shares to investors, not the total worth of the company; if OpenAI sold that amount of stock while keeping most shares private, the entire company could still be valued at $1 trillion or more. The final figures and timing will likely change based on business growth and market conditions.
