Children’s Agency Shelves AI System For Identifying Abuse; Test Run Found Errors in 60% of System’s Decisions

The Children and Families Agency
7:00 JST, March 4, 2025
The Children and Families Agency has decided not to introduce a system that uses AI to determine whether children suspected of being abused should be placed in temporary custody, it has been learned.
The government had been developing the system since fiscal 2021, at a cost of about ¥1 billion. It was expected to help child consultation centers decide whether to place children who were possibly being abused in temporary custody, but during the testing phase, the system’s AI made mistakes in 60% of its decisions.
It was concluded that the AI was not suited to making decisions about abuse and that it would be difficult to put the system into practical use.
Temporary custody is a measure stipulated in the Child Welfare Law. If it is suspected that children younger than 18 are being abused, they can be taken out of their home at the discretion of child consultation centers.
According to agency officials, the system was designed to be used by child consultation centers across the nation, which are suffering from chronic staff shortages.
The AI installed in the system learned from about 5,000 cases of child abuse. When information is entered into the system on 91 points, including whether there are any injuries, where those injuries are located, and the attitude of the parents, the system displays a score of 0 to 100 to show the likelihood of abuse.
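The article does not disclose how the system computes its score, but the described interface — 91 yes/no inputs mapped to a 0-to-100 likelihood — resembles a simple binary-feature classifier. The sketch below is purely illustrative, assuming a logistic scoring model; all feature names, weights, and the bias value are hypothetical and not details of the actual system.

```python
import math

def risk_score(answers: dict[str, bool], weights: dict[str, float],
               bias: float = -2.0) -> int:
    """Hypothetical logistic score over yes/no features, scaled to 0-100.

    Each "yes" answer contributes its weight to the logit; the sigmoid
    maps the sum to a probability-like value, rounded to an integer score.
    """
    z = bias + sum(weights[name] for name, answered_yes in answers.items()
                   if answered_yes)
    return round(100 / (1 + math.exp(-z)))

# Illustrative subset of the 91 checklist points (assumed names/weights).
weights = {
    "visible_injury": 2.5,
    "injury_on_head": 1.8,
    "parent_uncooperative": 1.2,
}

high = risk_score({"visible_injury": True, "injury_on_head": True,
                   "parent_uncooperative": True}, weights)
low = risk_score({"visible_injury": False, "injury_on_head": False,
                  "parent_uncooperative": False}, weights)
```

Note that a yes/no-only encoding like this cannot represent an injury's extent or severity: a minor bruise and a life-threatening head injury produce the same input vector, which is consistent with the accuracy problems described later in the article.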
A prototype had been nearly completed, and this fiscal year, with the cooperation of child consultation centers in 10 local governments, verification tests were conducted to assess the risk in 100 actual cases of abuse.
However, when its accuracy was checked by senior officials at each center, the system reportedly provided questionable responses in 62 of the 100 cases, such as “the possibility of abuse is extremely low.”
In one case, a child testified that their mother “almost killed me.” Despite the fact that the child said their mother had grabbed their clothes and slammed their head onto the floor, the score was only “2-3.” This is thought to be because there were no bruises or other marks on the child’s body.
The agency had aimed to introduce the system this fiscal year, but it has decided to suspend development and abandon the project, on the grounds that “it is too early to provide the system to child consultation centers.” It will consider whether to resume the project while monitoring advances made in AI.
Several experts argued that it is difficult for AI to accurately determine whether abuse has occurred, as the specifics vary from case to case. They also said 5,000 cases were too few for the system’s AI to learn from.
“Important factors such as weight loss in children were not included” as information to be entered in the system, a person related to the agency said. In addition, the system is designed for the user to just enter “yes” or “no” regarding the 91 points. It does not require the user to enter the extent or range of an injury, for example, even if an injury exists — a factor that is thought to have reduced the accuracy of the system.
“AI is not a magic wand that can do anything,” said Prof. Ichiro Sato of the National Institute of Informatics. “It won’t work unless you examine its feasibility and design the system carefully before development.
“The use of AI is expected to progress in government ministries and agencies, and it’s necessary to share this failure with other ministries and local governments and make use of it in the future.”
"Society" POPULAR ARTICLE
-
Snow Falls in Tokyo; Temperature in Tokyo Turns from Spring to Winter in 1 Day (UPDATE 1)
-
Woman in 20s Believed Live-streaming on Tokyo Street Stabbed to Death; Man at Scene Arrested (UPDATE 1)
-
Roles of Social Media in Elections: Election Admin Commissions Powerless Against Campaign Obstruction
-
Snow Expected in Tokyo Metropolitan Area
-
People Celebrate Japan Emperor’s 65th Birthday at Palace (Update2)
JN ACCESS RANKING