Sci-Tech
Apple pulls US immigration official tracking apps
Apple has pulled apps that let users flag sightings of officers from US Immigration and Customs Enforcement (ICE).
The tech giant told the BBC it had removed ICEBlock from its App Store after law enforcement made it aware of “safety risks” associated with it and “similar apps”.
According to a statement sent to Fox News Digital, US Attorney General Pam Bondi had “demanded” the app’s removal, saying it was “designed to put ICE officers at risk”.
The app’s creator said such claims were “patently false” and accused Apple of “capitulating to an authoritarian regime.”
ICEBlock is among a number of apps released this year in response to President Trump’s crackdown on illegal immigration and an upsurge in ICE raids.
Critics – such as the creator of ICEBlock – accuse the government of abusing its powers and “bringing terror” to US streets.
The free app works by showing the movements of immigration officers. It has been downloaded more than a million times in the US.
However, Bondi argued it was being used to target ICE officers. The FBI said the man who attacked an ICE facility in Dallas in September – killing two detainees – had used similar apps to track the movements of agents and their vehicles.
In a statement Apple said: “We created the App Store to be a safe and trusted place to discover apps.
“Based on information we’ve received from law enforcement about the safety risks associated with ICEBlock, we have removed it and similar apps from the App Store.”
But its creator, Joshua Aaron, denied it posed a threat.
He said ICEBlock was “no different from crowd sourcing speed traps”, a feature offered by every notable mapping application, including Apple’s own Maps app.
“This is protected speech under the first amendment of the United States Constitution.”
Mr Aaron – who has worked in the tech industry for years – previously told BBC Verify he developed the app out of concern over a spike in immigration raids.
“I certainly watched pretty closely during Trump’s first administration and then I listened to the rhetoric during the campaign for the second,” he said.
“My brain started firing on what was going to happen and what I could do to keep people safe.”
The White House and the FBI criticised the app after it launched in April, after which downloads rose.
Sci-Tech
Japan faces Asahi beer shortage after cyber-attack
Japan is facing a shortage of Asahi products, including beer and bottled tea, as the drinks giant grapples with a major cyber-attack that has disrupted its operations in the country.
Most of the Asahi Group’s factories in Japan have been at a standstill since Monday, after the attack hit its ordering and delivery systems, the firm has said.
Major Japanese retailers, including FamilyMart and Lawson, have now warned customers to expect shortages of Asahi products.
The BBC has contacted Asahi for comment.
Asahi has temporarily suspended orders and shipments of its products with “no prospect of resumption”, FamilyMart said in a statement on Thursday.
The firm – one of Japan’s largest convenience store chains – said its Famimaru range of bottled teas, which is made by Asahi, is expected to be in short supply or out of stock.
“We sincerely apologise to our customers for any inconvenience caused,” said FamilyMart, adding that it is working with Asahi to resume the sale of the products.
Lawson, another major Japanese retailer, also said it expected some Asahi products to be in short supply from today.
The retailer said it plans to stock up on alternative products to minimise the impact on customers.
Japanese supermarket chain Life Corporation also warned that Asahi products may soon go out of stock.
Asahi is the biggest brewer in Japan and also owns Fuller’s in the UK and global brands including Peroni, Pilsner Urquell and Grolsch.
Japan accounts for about half of its total sales.
Asahi is best known for its Super Dry beer and also makes soft drinks and food products, as well as supplying own-brand goods to retailers like FamilyMart.
Asahi said earlier this week that the system failure is limited to its domestic operations. It also said there had been no “confirmed leakage of personal information or customer data”.
“We are actively investigating the cause and working to restore operations; however, there is currently no estimated timeline for recovery,” it said at the time.
Sci-Tech
TikTok ‘recommends sexual content and porn to children’, says report
Angus Crawford, BBC News Investigations
TikTok’s algorithm recommends pornography and highly sexualised content to children’s accounts, according to a new report by a human rights campaign group.
Researchers created fake child accounts and activated safety settings but still received sexually explicit search suggestions.
The suggested search terms led to sexualised material including explicit videos of penetrative sex.
The platform says it is committed to safe and age-appropriate experiences and took immediate action once it knew of the problem.
In late July and early August this year, researchers from campaign group Global Witness set up four accounts on TikTok pretending to be 13-year-olds.
They used false dates of birth and were not asked to provide any other information to confirm their identities.
Pornography
They also turned on the platform’s “restricted mode”, which TikTok says prevents users seeing “mature or complex themes, such as… sexually suggestive content”.
Without doing any searches themselves, investigators found overtly sexualised search terms being recommended in the “you may like” section of the app.
Those search terms led to content of women simulating masturbation.
Other videos showed women flashing their underwear in public places or exposing their breasts.
At its most extreme, the content included explicit pornographic films of penetrative sex.
These videos were embedded in otherwise innocent content in a successful attempt to avoid content moderation.
Ava Lee from Global Witness said the findings came as a “huge shock” to researchers.
“TikTok isn’t just failing to prevent children from accessing inappropriate content – it’s suggesting it to them as soon as they create an account”.
Global Witness is a campaign group which usually investigates how big tech affects discussions about human rights, democracy and climate change.
Researchers stumbled on this problem while conducting other research in April this year.
Videos removed
They informed TikTok, which said it had taken immediate action to resolve the problem.
But in late July and August this year, the campaign group repeated the exercise and found once again that the app was recommending sexual content.
TikTok says that it has more than 50 features designed to keep teens safe: “We are fully committed to providing safe and age-appropriate experiences”.
The app says it removes nine out of 10 videos that violate its guidelines before they are ever viewed.
When informed by Global Witness of its findings, TikTok says it took action to “remove content that violated our policies and launch improvements to our search suggestion feature”.
Children’s Codes
On 25 July this year, the UK Online Safety Act’s Children’s Codes came into force, imposing a legal duty on platforms to protect children online.
Platforms now have to use “highly effective age assurance” to stop children from seeing pornography. They must also adjust their algorithms to block content which encourages self-harm, suicide or eating disorders.
Global Witness carried out its second research project after the Children’s Codes came into force.
Ava Lee from Global Witness said: “Everyone agrees that we should keep children safe online… Now it’s time for regulators to step in.”
During their work, researchers also observed the reaction of other users to the sexualised search terms they were being recommended.
One commenter wrote: “can someone explain to me what is up w my search recs pls?”
Another asked: “what’s wrong with this app?”

Sci-Tech
The people turning to AI for dating and relationship advice
Suzanne Bearne, Technology Reporter
Earlier this year, Rachel wanted to clear the air with a man she had been dating before seeing him again in a wider friendship group setting.
“I’d used ChatGPT for job searching but had heard someone else use it [for dating advice],” says Rachel, who does not want her real name used, and lives in Sheffield.
“I was feeling quite distressed and wanted guidance, and didn’t want friends involved.”
Before the phone call, she turned to ChatGPT for help. “I asked, how do I deal with this conversation but not be on the defensive.”
Its response?
“ChatGPT does this all the time but it was something like ‘wow, that’s such a self-aware question, you must be emotionally mature going through this. Here are some tips’. It was like a cheerleader on my side, like I was right and he was wrong.”
Overall, she says it was “useful” but described the language as “very much like therapy speak, using words like ‘boundaries’”.
“All I took from it was it reminded me to be OK to do it on my terms, but I didn’t take it too literally.”
Rachel is not alone in turning to AI for advice in dealing with relationships.
According to research by the online dating firm Match, almost half of Generation Z Americans (those born between 1997 and 2012) said they have used large language models (LLMs) such as ChatGPT for dating advice – more than any other generation.
People are turning to AI to help craft breakup messages, to dissect conversations they’re having with people they’re dating, and to resolve problems in relationships.
Dr Lalitaa Suglani, a psychologist and relationship expert, says AI can be a useful tool, especially for people who feel overwhelmed or unsure when it comes to communication in relationships.
It may help them to craft a text, process a confusing message or source a second opinion, which can offer a moment of pause instead of being reactive, she says.
“In many ways it can function like a journalling prompt or reflective space, which can be supportive when used as a tool and not a replacement for connection,” says Dr Suglani.
However, she flags several concerns.
“LLMs are trained to be helpful and agreeable and repeat back what you are sharing, so they may subtly validate dysfunctional patterns or echo back assumptions, especially if the prompt is biased. The problem with this is it can reinforce distorted narratives or avoidance tendencies.”
For example, she says, using AI to write a breakup text might be a way to avoid the discomfort of the situation. That might contribute to avoidant behaviours, as the individual is not sitting with how they actually feel.
Using AI might also inhibit their own development.
“If someone turns to an LLM every time they’re unsure how to respond or feel emotionally exposed, they might start outsourcing their intuition, emotional language, and sense of relational self,” says Dr Suglani.
She also notes that AI messages can be emotionally sterile and make communication feel scripted, which can be unnerving to receive.
Despite the challenges, services are springing up to serve the market for relationship advice.
Mei is a free AI-powered service. Built on OpenAI’s technology, it responds to relationship dilemmas in a conversational style.
“The idea is to allow people to instantly seek help to navigate relationships because not everyone can talk to friends or family for fear of judgment,” says New York-based founder Es Lee.
Mr Lee says more than half of the issues brought up on the AI tool concern sex, a subject that many may not wish to discuss with friends or a therapist.
“People are only using AI as existing services are lacking,” he says.
Another common use is asking how to reword a message or fix an issue in a relationship. “It’s like people need AI to validate it [the problem].”
When giving relationship advice, issues of safety could come up. A human counsellor would know when to intervene and protect a client from a potentially harmful situation.
Would a relationship app provide the same guardrails?
Mr Lee recognises the concern over safety. “I think the stakes are higher with AI because it can connect with us on a personal level the way no other technology has.”
But he says Mei has “guardrails” built into the AI.
“We welcome professionals and organisations to partner with us and take an active role in molding our AI products,” he says.
OpenAI, the creator of ChatGPT, says that its latest model has shown improvements in areas like avoiding unhealthy levels of emotional reliance and sycophancy.
In a statement, the company said:
“People sometimes turn to ChatGPT in sensitive moments, so we want to make sure it responds appropriately, guided by experts. This includes directing people to professional help when appropriate, strengthening our safeguards in how our models respond to sensitive requests and nudging for breaks during long sessions.”
Another area of concern is privacy. Such apps could potentially collect very sensitive data, which could be devastating if exposed by hackers.
Mr Lee says “at every fork in the road on how we handle user privacy, we choose the one that preserves privacy and collects only what we need to provide the best service.”
As part of that policy, he says that Mei does not ask for information that would identify an individual, other than an email address.
Mr Lee also says conversations are saved temporarily for quality assurance but discarded after 30 days. “They are not currently saved permanently to any database.”
Some people are using AI in combination with a human therapist.
When Corinne (not her real name) was looking to end a relationship late last year, she started to turn to ChatGPT for advice on how to deal with it.
London-based Corinne says she was inspired to turn to AI after hearing her housemate talk positively about using it for dating advice, including how to break up with someone.
She said she would ask it to respond to her questions in the style of relationship expert Jillian Turecki or holistic psychologist Dr Nicole LePera, both very popular on social media.
When she started dating again at the start of the year, she turned to ChatGPT once more, asking for advice in the style of her favourite relationship experts.
“Around January I had been on a date with a guy and I didn’t find him physically attractive but we get on really well so I asked it if it was worth going on another date. I knew they would say yes as I read their books but it was nice to have the advice tailored to my scenario.”
Corinne, who has a therapist, says the discussions with her therapist delve more into childhood than the questions she raises with ChatGPT over dating or relationship queries.
She says that she treats AI advice with “a bit of distance”.
“I can imagine people ending relationships and perhaps having conversations they shouldn’t be having yet [with their partner] as ChatGPT just repeats back what it thinks you want to hear.
“It’s good in life’s stressful moments. And when a friend isn’t around. It calms me down.”