What is Australia’s Online Safety Amendment about? | Explained

The story so far:

Australia’s House of Representatives recently passed the Online Safety Amendment (Social Media Minimum Age) Bill 2024, which imposes an obligation on certain social media platforms to take reasonable steps to prevent children under 16 years of age from having an account. The Bill also amends the Age Discrimination Act 2004 to facilitate this reform. Once passed by the Senate, the Bill will become law upon receiving Royal Assent.

What is the new law about?

There is currently no legislated minimum age for social media use in Australia. Children under the age of 13 are usually unable to create social media accounts, a restriction imposed by global platforms in line with the United States’ law on children’s online privacy.

The object of the amendment (a new Part 4A, ‘Social media minimum age’, inserted into Australia’s existing Online Safety Act 2021) is to ‘reduce the risk of harm to age-restricted users from certain kinds of social media platforms’. An ‘age-restricted user’ means ‘an Australian child who has not reached 16 years’.

The age-restricted social media platforms (ARSMP) affected by the proposed amendment would cover, with some exclusions, any electronic service that enables online social interaction between two or more end-users and allows end-users to post material on the service.

The Australian Minister for Communications clarified that the government expects the ARSMP will, at minimum, include ‘TikTok, Facebook, Snapchat, Reddit, Instagram, X, among others’. These services will be required to take ‘reasonable steps’ to prevent persons under 16 years of age from creating and holding an account. However, the Minister will also have discretion to keep ‘a digital service out of the scope of the definition of ARSMP’.

How will the ban be implemented?

The law proposes that the providers of ARSMPs ‘must take reasonable steps to prevent age-restricted users having accounts with the age-restricted social media platforms’. Failing to meet this requirement may result in a maximum civil penalty of $49.5 million. The Bill does not, however, define what is meant by ‘reasonable steps’.

The eSafety Commissioner will be responsible for formulating, in writing, guidelines for taking reasonable steps to prevent age-restricted users having accounts with age-restricted social media platforms, and for promoting those guidelines.

The eSafety Commissioner is Australia’s independent regulator for online safety. eSafety supports individuals and promotes online safety for all Australians. The Age Verification Roadmap, released by eSafety in March 2023, found that ‘the age assurance market is immature but developing. Each technology had benefits and trade-offs’.

The proposed restriction will not take effect earlier than 12 months after the law comes into force. The affected stakeholders will be consulted, and the government’s age assurance trial will guide the industry on which age assurance technologies would be considered ‘reasonable’ and consistent with the minimum age obligation. However, it was confirmed that all account holders on ARSMPs will have to verify their age.

The law does not otherwise oblige ARSMPs to prevent people under the age of 16 from accessing content on their platforms. Nor is there any civil penalty for parents who provide access to ARSMPs to children under 16.

What are the privacy concerns?

It has been observed that ‘age assurance technologies can pose privacy risks due to the type and amount of data they collect, store, use, and share’. To address these concerns, the proposed law will establish privacy obligations where an ‘entity’ holds personal information about an individual that was collected for the purpose of taking reasonable steps to establish identity. Penalties may be imposed under the Privacy Act 1988 if the entity uses or discloses the information without falling within one of the exceptions under that Act.

There will also be an obligation on entities to destroy the collected information ‘after using or disclosing it for the purposes for which it was collected’. The government also announced its intention to legislate a ‘Digital Duty of Care’ to ‘place the onus on digital platforms to proactively keep Australians safe and better prevent online harms’.

Is social media harmful to children?

Emerging research indicates that social media may impact children’s mental health. Despite its various benefits, the risks of social media are also well acknowledged. These risks may increase for children who lack the skills, experience, and cognitive capacity to navigate a complex online environment. The report of the Joint Select Committee on Social Media and Australian Society acknowledged that the undeniable harms include ‘the effects of disordered eating, bullying and self-harm’.

Despite these findings, a blanket ban prohibiting children from using social media is not universally considered the most advantageous solution. Some researchers and academics expressed concern ‘that a ban is too blunt an instrument to address risks effectively’. The Australian Greens criticised the legislation as ‘rushed, reckless and goes against the evidence’.

Did any other country try such a ban?

Australia is perhaps the first country to have initiated such legislation. The Communications Decency Act of 1996 was the United States’ first attempt to make the Internet safe for minors. However, it was struck down by the U.S. Supreme Court, which held that the portion intended to protect minors from indecent speech was too broad and an unconstitutional abridgement of the First Amendment right to free speech. The second attempt to protect minors from exposure to sexually explicit materials on the Internet, the Child Online Protection Act of 1998, was also defeated in the courts.

The U.S. Congress then came up with the Children’s Internet Protection Act in 2000, which required schools and libraries to install filters on computers used by minors or lose federal funds. To be compliant with the law, libraries must certify that they have filtering technology in place. Thus, the United States has a law which protects children in schools and libraries from indecent materials.

What is India’s position on social media restrictions for children?

India has laws which punish various acts involving online ‘child pornography’ (child abuse and exploitative material), but they do not prohibit children from creating and holding accounts on social media platforms.

Currently, though intermediaries and social media platforms are required to observe ‘due diligence’ to prevent the commission of various online crimes, they are protected under Section 79 of the IT Act for content posted by third parties. The Union Minister of Information and Broadcasting recently said that social media platforms need to be more responsive in checking indecent material posted by third parties. But there is no indication whether the Government is contemplating any legislation to restrict children below a certain age from having an account on social media platforms.

R.K. Vij, a former Indian Police Service officer
