
US Senate Committee Probes Tech Executives on Election Threats

By Katie Paul

NEW YORK (Reuters) – U.S. lawmakers scrutinized technology executives on Wednesday regarding their strategies to combat foreign disinformation in anticipation of the elections in November. Both lawmakers and tech leaders pinpointed the 48 hours surrounding Election Day as the period most susceptible to disinformation efforts.

“There is a potential moment of peril ahead. Today we are 48 days away from the election… the most perilous moment will come, I think, 48 hours before the election,” stated Brad Smith, President of Microsoft, during a hearing convened by the U.S. Senate Intelligence Committee.

Senator Mark Warner, chairing the committee, echoed Smith’s concerns. He remarked that the 48 hours following the close of polls on November 5 could also be “equally if not more significant,” particularly in a tightly contested election.

Executives from Google and Meta, the parent company of Facebook, Instagram, and WhatsApp, also provided testimony during the hearing.

Although Elon Musk’s platform X was invited to speak, it declined to attend, according to several senators. A spokesperson for X indicated that the company’s invited witness, former global affairs head Nick Pickles, had resigned earlier this month.

TikTok did not receive an invitation to participate, a company representative confirmed.

To emphasize his worries regarding the time just before voting, Smith cited an incident from Slovakia’s 2023 election, where a false audio recording allegedly featuring a political leader discussing vote rigging surfaced shortly before the election, quickly spreading online.

Warner and other senators highlighted tactics revealed in a recent U.S. crackdown on alleged Russian influence campaigns, which included fake websites mimicking legitimate U.S. news organizations.

“How does this get through? How do we know how extensive this is?” Warner posed to the tech executives, requesting data from the companies by the following week detailing how many Americans engaged with the disinformation and the volume of advertisements promoting it.

In response to the challenges posed by new generative artificial intelligence technologies, tech companies have generally adopted labeling and watermarking measures to mitigate risks associated with the easy creation of realistic but deceptive images, audio, and video.

When asked how their companies would respond to the emergence of a deepfake of a political candidate shortly before the elections, both Smith and Meta’s President of Global Affairs Nick Clegg affirmed that their companies would apply labels to such content. Clegg added that Meta might also consider limiting the spread of such material.
