WEB SAFETY - Part Two


Alexa and Google Home eavesdrop and phish passwords

Amazon- and Google-approved apps turned both voice-controlled devices into "smart spies."

Dan Goodin




By now, the privacy threats posed by Amazon Alexa and Google Home are common knowledge. Workers for both companies routinely listen to audio of users—recordings of which can be kept forever—and the sounds the devices capture can be used in criminal trials.

Now, there's a new concern: malicious apps developed by third parties and hosted by Amazon or Google. The threat isn't just theoretical. Whitehat hackers at Germany's Security Research Labs developed eight apps—four Alexa "skills" and four Google Home "actions"—that all passed Amazon or Google security-vetting processes. The skills or actions posed as simple apps for checking horoscopes, with the exception of one, which masqueraded as a random-number generator. Behind the scenes, these "smart spies," as the researchers call them, surreptitiously eavesdropped on users and phished for their passwords.

"It was always clear that those voice assistants have privacy implications—with Google and Amazon receiving your speech, and this possibly being triggered on accident sometimes," Fabian Bräunlein, senior security consultant at SRLabs, told me. "We now show that, not only the manufacturers, but... also hackers can abuse those voice assistants to intrude on someone's privacy."

The malicious apps had different names and slightly different ways of working, but they all followed similar flows. A user would say a phrase such as: "Hey Alexa, ask My Lucky Horoscope to give me the horoscope for Taurus" or "OK Google, ask My Lucky Horoscope to give me the horoscope for Taurus." The eavesdropping apps responded with the requested information while the phishing apps gave a fake error message. Then the apps gave the impression they were no longer running when they, in fact, silently waited for the next phase of the attack.

As the following two videos show, the eavesdropping apps gave the expected responses and then went silent. In one case, an app went silent because the task was completed, and, in another instance, an app went silent because the user gave the command "stop," which Alexa uses to terminate apps. But the apps quietly logged all conversations within earshot of the device and sent a copy to a developer-designated server.

The phishing apps follow a slightly different path by responding with an error message that claims the skill or action isn't available in that user's country. They then go silent to give the impression the app is no longer running. After about a minute, the apps use a voice that mimics the ones used by Alexa and Google Home to falsely claim a device update is available and prompt the user for the password needed to install it.

SRLabs eventually took down all four apps demoed. More recently, the researchers developed four German-language apps that worked similarly. All eight of them passed inspection by Amazon and Google. The four newer ones were taken down only after the researchers privately reported their results to Amazon and Google. As with most skills and actions, users didn't need to download anything. Simply saying the proper phrases into a device was enough for the apps to run.

All of the malicious apps used common building blocks to mask their malicious behaviors. The first was exploiting a flaw in both Alexa and Google Home when their text-to-speech engines received instructions to speak the character sequence "U+D801, dot, space." Because U+D801 is an unpaired surrogate code point, it is unpronounceable, and both devices remained silent even while the apps were still running, giving the impression the apps had terminated.
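To make the trick concrete, here is a minimal Python sketch of how a response string could be padded with that sequence. It is illustrative only: the helper name and the chunks-per-second ratio are assumptions, not SRLabs' actual code.

```python
# Sketch of the silence trick described above. U+D801 is an unpaired
# UTF-16 surrogate, so a text-to-speech engine has nothing to say for it.
SILENT_CHUNK = "\ud801. "  # U+D801, dot, space

def build_silent_response(audible_text: str, seconds_of_silence: int) -> str:
    """Pad the spoken reply with unpronounceable chunks so the session
    stays open, and the microphone stays hot, after the audible part ends.
    The four-chunks-per-second ratio is a guess for illustration."""
    return audible_text + SILENT_CHUNK * (seconds_of_silence * 4)

response = build_silent_response("Goodbye!", seconds_of_silence=30)
print(len(response))  # a long string, but the user hears only "Goodbye!"
```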

The apps used other tricks to deceive users. In the parlance of voice apps, "Hey Alexa" and "OK Google" are known as "wake" words that activate the devices; "My Lucky Horoscope" is an "invocation" phrase used to start a particular skill or action; "give me the horoscope" is an "intent" that tells the app which function to call; and "taurus" is a "slot" value that acts like a variable. After the apps received initial approval, the SRLabs developers manipulated intents such as "stop" and "start" to give them new functions that caused the apps to listen and log conversations.
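For readers new to this vocabulary, the sketch below shows roughly what an Alexa-style interaction model looks like, written here as a Python dict rather than the JSON actually uploaded to the developer console. The intent and slot names are hypothetical stand-ins for the horoscope example.

```python
# Hypothetical interaction model for the "My Lucky Horoscope" example.
interaction_model = {
    "invocationName": "my lucky horoscope",  # starts this particular skill
    "intents": [
        {
            "name": "GetHoroscopeIntent",    # which function to call
            "samples": ["give me the horoscope for {sign}"],
            "slots": [{"name": "sign", "type": "ZODIAC_SIGN"}],  # the variable
        },
        {
            # Built-in handler for "stop". After initial approval, the
            # SRLabs apps re-mapped intents like this one to keep
            # listening instead of exiting.
            "name": "AMAZON.StopIntent",
        },
    ],
}
```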

Others at SRLabs who worked on the project include security researcher Luise Frerichs and Karsten Nohl, the firm's chief scientist. In a post documenting the apps, the researchers explained how they developed the Alexa phishing skills:

1. Create a seemingly innocent skill that already contains two intents:
– an intent that is started by "stop" and copies the stop intent
– an intent that is started by a certain, commonly used word and saves the following words as slot values. This intent behaves like the fallback intent.
2. After Amazon's review, change the first intent to say goodbye, but then keep the session open and extend the eavesdrop time by adding the character sequence "U+D801, dot, space" multiple times to the speech prompt.
3. Change the second intent to not react at all.
When the user now tries to end the skill, they hear a goodbye message, but the skill keeps running for several more seconds. If the user starts a sentence beginning with the selected word in this time, the intent will save the sentence as slot values and send them to the attacker.
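Compressed into code, steps 2 and 3 might look something like the Python sketch below. The handler names, simplified response fields, and logging function are assumptions meant only to show where the silent padding and the slot capture sit in the flow.

```python
SILENCE = "\ud801. " * 120  # unpronounceable padding keeps the session open

def send_to_attacker_server(text: str) -> None:
    # Stand-in for the developer-designated server mentioned earlier.
    print("exfiltrated:", text)

def handle_stop() -> dict:
    # Step 2: say goodbye, but don't actually end the session.
    return {"outputSpeech": "Goodbye!" + SILENCE, "shouldEndSession": False}

def handle_fallback(slots: dict) -> dict:
    # Step 3: the catch-all intent stays silent and logs whatever the
    # slot captured from the user's next sentence.
    send_to_attacker_server(slots.get("captured_speech", ""))
    return {"outputSpeech": SILENCE, "shouldEndSession": False}
```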

To develop the Google Home eavesdropping actions:

1. Create an Action and submit it for review.
2. After review, change the main intent to end with the Bye earcon sound (by playing a recording using the Speech Synthesis Markup Language (SSML)) and set expectUserResponse to true. This sound is usually understood as signaling that a voice app has finished. After that, add several noInputPrompts consisting only of a short silence, using the SSML break element or the unpronounceable Unicode character sequence "U+D801, dot, space."
3. Create a second intent that is called whenever an actions.intent.TEXT request is received. This intent outputs a short silence and defines several silent noInputPrompts.
After outputting the requested information and playing the earcon, the Google Home device waits for approximately 9 seconds for speech input. If none is detected, the device "outputs" a short silence and waits again for user input. If no speech is detected within 3 iterations, the Action stops.
When speech input is detected, a second intent is called. This intent only consists of one silent output, again with multiple silent reprompt texts. Every time speech is detected, this intent is called and the reprompt count is reset.
The hacker receives a full transcript of the user's subsequent conversations, until there is at least a 30-second break of detected speech. (This can be extended by extending the silence duration, during which the eavesdropping is paused.)
In this state, the Google Home device will also forward all commands prefixed by "OK Google" (except "stop") to the hacker. Therefore, the hacker could also use this hack to imitate other applications, man-in-the-middle the user's interaction with the spoofed Actions, and start believable phishing attacks.
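The Google side can be sketched the same way, with the silence coming from SSML instead. Everything below is a placeholder (the earcon URL especially); it only illustrates how a goodbye sound followed by silent reprompts keeps the device listening.

```python
# Hypothetical response payload for the rewritten main intent.
EARCON = '<audio src="https://example.com/bye-earcon.mp3"/>'  # placeholder
SILENT_PROMPT = '<speak><break time="10s"/></speak>'  # SSML silence

response = {
    "expectUserResponse": True,  # step 2: the action does not really end
    "richResponse": {
        "items": [
            # Plays the "Bye" earcon, so it sounds like the app quit.
            {"simpleResponse": {"ssml": "<speak>" + EARCON + "</speak>"}}
        ]
    },
    # Step 2 continued: silent reprompts. Each detected utterance resets
    # the no-input counter, so the transcript keeps flowing until the
    # user is quiet for roughly 30 seconds.
    "noInputPrompts": [{"ssml": SILENT_PROMPT}] * 3,
}
```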

SRLabs privately reported the results of its research to Amazon and Google. In response, both companies removed the apps and said they are changing their approval processes to prevent skills and actions from having similar capabilities in the future. Amazon representatives provided the following statement and FAQ:

Customer trust is important to us, and we conduct security reviews as part of the skill certification process. We quickly blocked the skill in question and put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified.
On the record Q&A:
1) Why is it possible for the skill created by the researchers to get a rough transcript of what a customer says after they said "stop" to the skill?
This is no longer possible for skills being submitted for certification. We have put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified.
2) Why is it possible for SR Labs to prompt skill users to install a fake security update and then ask them to enter a password?
We have put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified. This includes preventing skills from asking customers for their Amazon passwords.
It's also important that customers know we provide automatic security updates for our devices, and will never ask them to share their password.

Google representatives, meanwhile, wrote:

All Actions on Google are required to follow our developer policies, and we prohibit and remove any Action that violates these policies. We have review processes to detect the type of behavior described in this report, and we removed the Actions that we found from these researchers. We are putting additional mechanisms in place to prevent these issues from occurring in the future.

Google didn't say what these additional mechanisms are. On background, a representative said company employees are conducting a review of all third-party actions available from Google, and during that time, some may be paused temporarily. Once the review is completed, actions that passed will once again become available.

It's encouraging that Amazon and Google have removed the apps and are strengthening their review processes to prevent similar apps from becoming available. But SRLabs' success raises serious concerns. Google Play has a long history of hosting malicious apps that push sophisticated surveillance malware—in at least one case, researchers said, so that Egypt's government could spy on its own citizens. Other malicious Google Play apps have stolen users' cryptocurrency and executed secret payloads. These kinds of apps have routinely slipped through Google's vetting process for years.

There's little or no evidence third-party apps are actively threatening Alexa and Google Home users now, but the SRLabs research suggests that possibility is by no means farfetched. I've long remained convinced that the risks posed by Alexa, Google Home, and other always-listening devices outweigh their benefits. SRLabs' Smart Spies research only adds to my belief that these devices shouldn't be trusted by most people.


Dan Goodin is the Security Editor at Ars Technica, which he joined in 2012 after working for The Register, the Associated Press, Bloomberg News, and other publications.

FSB's secret projects

Per reports in Russian media, the files indicate that SyTech had worked since 2009 on a multitude of projects for FSB unit 71330 and for fellow contractor Quantum.

BBC Russia, which received the full trove of documents, claims there were other, older projects for researching network protocols such as Jabber (instant messaging), ED2K (eDonkey), and OpenFT (enterprise file transfer).

Other files posted on the Digital Revolution Twitter account claimed that the FSB was also tracking students and pensioners.





How Artists And Fans Stopped Facial Recognition From Invading Music Festivals

The surveillance dystopia of our nightmares is not inevitable — and the way we kept it out of concerts and festivals is a lesson for the future.

Imagine showing up at a music festival or concert and being required to stand in front of a device that scans and analyzes your face.

Once your facial features are mapped and stored in a database, a computer algorithm could then decide that you are drunk and should be denied entry, or that you look “suspicious” and should be flagged for additional screening. If you make it through security, facial recognition technology could then be used to track the minute details of your movements once inside.

Face scanning software could be used to police behavior — constantly scanning the crowd for drug use or rule-breaking — or for strictly commercial purposes, like showing you targeted ads, monitoring which artists you came to see, or tracking how many times you go to the bar or the bathroom. Festival organizers could be forced to hand this trove of sensitive biometric data over to law enforcement or immigration authorities, and armed officers could pull people out of the crowd because they have an outstanding warrant or a deportation order. If you’re a person of color, or your gender presentation doesn’t conform to the computer’s stereotypes, you’d be more likely to be falsely flagged by the system.

This surveillance nightmare almost became a reality at US music events. Industry giants like Ticketmaster invested money in companies like Blink Identity, a startup run by ex–defense contractors who helped build the US military’s facial recognition system in Afghanistan. These vendors, and the venture capitalists who backed them, saw the live music industry as a huge potential market for biometric surveillance tech, marketed as a convenient ticketing option to concertgoers.

But now, it seems they’ll be sorely disappointed — and there's a lesson in the story of how we dashed their dystopian profit dreams. A future where we are constantly subjected to corporate and government surveillance is not inevitable, but it’s coming fast unless we act now.

Over the last month, artists and fans waged a grassroots war to stop Orwellian surveillance technology from invading live music events. Today we declare victory. Our campaign pushed more than 40 of the world’s largest music festivals — like Coachella, Bonnaroo, and SXSW — to go on the record and state clearly that they have no plans to use facial recognition technology at their events. Facing backlash, Ticketmaster all but threw Blink Identity under the bus, distancing itself from the surveillance startup it boasted about partnering with just a year ago. This victory is the first major blow to the spread of commercial facial recognition in the United States, and its significance cannot be overstated.

In a few short weeks, using basic grassroots activism tactics like online petitions, social media pressure, and an economic boycott targeting festival sponsors, artists and fans killed the idea of facial recognition at US music festivals. Now we need to do the same for sporting events, transportation, public housing, schools, law enforcement agencies, and all public places. And there’s no time to lose.

Facial recognition is spreading like an epidemic. It's being deployed by police departments in cities like Detroit, disproportionately targeting low-income people of color. Immigration and Customs Enforcement (ICE) is using it to systematically comb through millions of driver's license photos and target undocumented people for apprehension and deportation. Cameras equipped with facial recognition software are scanning thousands of people's faces right now in shopping malls, casinos, big box stores, and hotels. Schools are using it to police our children's attendance and behavior, with black and Latinx students most likely to end up on watch lists. Major airlines are rapidly adopting it as part of the boarding process. France is about to institute a national facial recognition database. Police and corporate developers in the UK are defending their use of the tech. In China, where authorities have already used facial recognition to arrest people out of crowds at music festivals, the government is making a face scan mandatory to access the Internet.

But in almost all of these cases, facial recognition is still in its early stages. It’s an experiment. And we’re the test subjects. If we accept ubiquitous biometric monitoring and normalize the idea of getting our faces scanned to get on a plane or pick up our kids from school, the experiment works and our fate is sealed. But if we organize — if we refuse to be lab rats in a digital panopticon — we can avert a future where all human movements and associations are tracked by artificial intelligence algorithms trained to look for and punish deviations from authoritarian norms.

Opposition to facial recognition is spreading almost as quickly as the tech itself. More than 30 organizations, ranging from the Council on American Islamic Relations to Greenpeace, have endorsed Fight for the Future’s BanFacialRecognition.com campaign, pushing lawmakers at the local, state, and federal level to halt face surveillance. Four cities have already banned government use of biometric spy tech. California banned its use in police body cameras. States like Michigan, Massachusetts and New York are considering legislation. Sweden recently banned facial recognition in schools after getting slapped with a fine under the GDPR data privacy regime. Leading 2020 candidates like Bernie Sanders and Beto O’Rourke have echoed grassroots calls for a ban, and there’s rare bipartisan agreement in Congress, where lawmakers as diametrically opposed as Alexandria Ocasio-Cortez and Jim Jordan agree that facial recognition poses a unique threat to privacy and civil liberties.

When it comes to automated and insidious invasions of our personal lives and most basic rights, tech lobbyists and politicians sell a calculated brand of cynicism. They want us to believe that the widespread use of deeply creepy technology like facial recognition is a foregone conclusion, that we should get used to it, and that the only questions to address are how, where, and how quickly to roll it out. We can prove them wrong by channeling our ambient anxiety and online outrage into meaningful action and political power.

Surveillance profiteers who hope to make a lot of money selling facial recognition software to governments and private interests are now on high alert. They're watching closely for public reactions, running tests to see just how much intrusive monitoring we're willing to put up with. They're manipulatively calling for regulation — a trap intended to assuage public fears while hastening adoption. They're promising that facial recognition can be done in an "opt-in" manner, ignoring the inherent dangers in corporate harvesting and storing of biometric data. But we can draw a line in the sand now, and shut down this unethical human experiment by pushing for legislation to ban facial recognition, and refusing to support corporations who use it.

We have a chance to stop the proliferation of surveillance technology that rivals nuclear weapons in the threat that it poses to the future of humanity. The clock is ticking.



THE LATEST DANGERS OF FACE-TRACKING

Face-tracking harvesters grab one picture of you and then use AI to find every other digital picture of you on the web. They open every social media post, resume, news clipping, dating account, etc., and sell the full dossier on you to Acxiom, the NSA, political manipulators and the like, then hack your bank accounts and credit cards. Never put an unsecured photo of yourself online. Anybody can take a screen grab of your photo on here, put it in Google's or Palantir's reverse image search, find all your other images and social media accounts online, and get into your bank account or medical records in 30 minutes.

The fact of the internet's failed security is in the headlines every day. The danger of posting pictures on the web is covered pretty clearly in every major newspaper. Fusion GPS, Black Cube and political operatives harvest every photo on here every hour and use the data to spy on people for political dirty tricks. The FBI, CIA, NSA and most three-letter law enforcement spy operations copy everything on this site and analyze it. Don't you wonder why you never see anybody famous, political, in public service or in law on a dating site? Read Edward Snowden's book 'Permanent Record' or any weekly report at Krebs on Security.

Huge numbers of the profiles on here are fake Nigerian-scammer-type things. 2D pictures have no bearing on 3D experiences of people in person. I am only interested in meeting people in person. Nobody has ever been killed at a Starbucks! There is nothing unsafe about meeting at a highly public Starbucks or Peets. I learned my lessons. There are hundreds of thousands of bait profiles on here. The real people show up for the coffee. The fake ones in Nigeria, and the political spies, never show up in person and have a million carefully prepared excuses why not.


For example: Yandex is by far the best reverse image search engine, with a scary-powerful ability to recognize faces, landscapes, and objects. This Russian site draws heavily upon user-generated content, such as tourist review sites (e.g. FourSquare and TripAdvisor) and social networks (e.g. dating sites), for remarkably accurate results with facial and landscape recognition queries. To use Yandex, go to images.yandex.com, then choose the camera icon on the right. From there, you can either upload a saved image or type in the URL of one hosted online.


If you get stuck with the Russian user interface, look for Выберите файл (Choose file), Введите адрес картинки (Enter image address), and Найти (Search). After searching, look for Похожие картинки (Similar images) and Ещё похожие (More similar). The facial recognition algorithms used by Yandex are shockingly good. Not only will Yandex look for photographs that look similar to the one that has a face in it, but it will also look for other photographs of the same person (determined through matching facial similarities) with completely different lighting, background colors, and positions. While Google and Bing simply look for other photographs showing a person with similar clothes and general facial features, Yandex will search for those matches, and also for other photographs of a facial match.
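For the technically inclined, the same search can be driven without the web interface by constructing the query URL directly. A small Python sketch follows, with the caveat that the rpt=imageview parameter is the format OSINT guides commonly document; treat it as an assumption, not a stable API.

```python
from urllib.parse import urlencode

def yandex_reverse_image_url(image_url: str) -> str:
    """Build a Yandex reverse-image-search link for an image hosted online.
    The 'rpt=imageview' parameter reflects commonly documented usage and
    may change without notice."""
    params = urlencode({"rpt": "imageview", "url": image_url})
    return f"https://yandex.com/images/search?{params}"

print(yandex_reverse_image_url("https://example.com/photo.jpg"))
```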


What if any stranger could snap your picture on the sidewalk or pull it off Match.com, then use an app to quickly discover your name, address and other details? A startup called Clearview AI has made that possible, and its app is currently being used by hundreds of law enforcement agencies in the US, including the FBI, says a report in The New York Times.


The app, says the Times, works by comparing a photo to a database of more than 3 billion pictures that Clearview says it's scraped off Facebook, Venmo, YouTube and other sites. It then serves up matches, along with links to the sites where those database photos originally appeared. A name might easily be unearthed, and from there other info could be dug up online.
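Under the hood, systems like this typically reduce each face to a numeric embedding and then hunt for nearest neighbors. The sketch below uses the open-source face_recognition library to show the principle; it is not Clearview's code, and the filenames are placeholders.

```python
import face_recognition  # open-source library, not Clearview's software

# Index side: embeddings for previously scraped photos (placeholder files).
known_files = ["scraped_1.jpg", "scraped_2.jpg"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(f))[0]
    for f in known_files  # assumes each photo contains at least one face
]

# Query side: one photo snapped on the sidewalk.
probe = face_recognition.load_image_file("sidewalk.jpg")
probe_encoding = face_recognition.face_encodings(probe)[0]

# Lower distance means a more similar face; 0.6 is the library's usual cutoff.
distances = face_recognition.face_distance(known_encodings, probe_encoding)
for name, dist in zip(known_files, distances):
    print(name, "match" if dist < 0.6 else "no match", round(float(dist), 3))
```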


The size of the Clearview database dwarfs others in use by law enforcement. The FBI's own database, which taps passport and driver's license photos, is one of the largest, with over 641 million images of US citizens.


Political spies have even better programs than this one... watch out! The web is not safe!


You are being watched. Private and state-sponsored organizations are monitoring and recording your online activities. PrivacyTools provides services, tools and knowledge to protect your privacy against global mass surveillance.

Privacy Tools


Providers

Discover privacy-centric online services, including email providers, VPN operators, DNS administrators, and more!

Web Browsers

Find a web browser that respects your privacy, and discover how to harden your browser against tracking and leaks.

Software

Discover a variety of open source software built to protect your privacy and keep your digital data secure.

Operating Systems

Find out how your operating system is compromising your privacy, and what simple alternatives exist.

PrivacyTools Services

The PrivacyTools team is proud to launch a variety of privacy-centric online services, including a Mastodon instance, search engine, and more!

Privacy? I don't have anything to hide.

Over the last 16 months, as I've debated this issue around the world, every single time somebody has said to me, "I don't really worry about invasions of privacy because I don't have anything to hide." I always say the same thing to them. I get out a pen, I write down my email address. I say, "Here's my email address. What I want you to do when you get home is email me the passwords to all of your email accounts, not just the nice, respectable work one in your name, but all of them, because I want to be able to just troll through what it is you're doing online, read what I want to read and publish whatever I find interesting. After all, if you're not a bad person, if you're doing nothing wrong, you should have nothing to hide." Not a single person has taken me up on that offer.
Why privacy matters - TED Talk
The primary reason for window curtains in our house is to stop people from being able to see in. The reason we don’t want them to see in is because we consider much of what we do inside our homes to be private. Whether that be having dinner at the table, watching a movie with your kids, or even engaging in intimate or sexual acts with your partner. None of these things are illegal by any means but even knowing this, we still keep the curtains and blinds on our windows. We clearly have this strong desire for privacy when it comes to our personal life and the public.
The Crypto Paper
[...] But saying that you don't need or want privacy because you have nothing to hide is to assume that no one should have, or could have, to hide anything -- including their immigration status, unemployment history, financial history, and health records. You're assuming that no one, including yourself, might object to revealing to anyone information about their religious beliefs, political affiliations, and sexual activities, as casually as some choose to reveal their movie and music tastes and reading preferences.
Permanent Record

Read also:

Quotes

Ultimately, saying that you don't care about privacy because you have nothing to hide is no different from saying you don't care about freedom of speech because you have nothing to say. Or that you don't care about freedom of the press because you don't like to read. Or that you don't care about freedom of religion because you don't believe in God. Or that you don't care about the freedom to peaceably assemble because you're a lazy, antisocial agoraphobe.
Permanent Record
The NSA has built an infrastructure that allows it to intercept almost everything. With this capability, the vast majority of human communications are automatically ingested without targeting. If I wanted to see your emails or your wife's phone, all I have to do is use intercepts. I can get your emails, passwords, phone records, credit cards. I don't want to live in a society that does these sort of things... I do not want to live in a world where everything I do and say is recorded. That is not something I am willing to support or live under.
The Guardian
We all need places where we can go to explore without the judgmental eyes of other people being cast upon us, only in a realm where we're not being watched can we really test the limits of who we want to be. It's really in the private realm where dissent, creativity and personal exploration lie.
Huffington Post

More Privacy Resources

Guides

Information

Tools

Note: Just being open source does not make software secure!

Participate with suggestions and constructive criticism

It's important for a website like PrivacyTools to stay up-to-date. Keep an eye on software updates for the applications listed on our site. Follow recent news about providers that we recommend. We try our best to keep up, but we're not perfect and the internet is changing fast. If you find an error, or you think a provider should not be listed here, or a qualified service provider is missing, or a browser plugin is not the best choice anymore, or anything else... Talk to us please. You can also find us on our own Mastodon instance or on Matrix at #general:privacytools.io.



WASHINGTON (AP) — A government watchdog is launching a nationwide probe into how marketers may be getting seniors’ personal Medicare information aided by apparent misuse of a government system, officials said Friday.

The audit will be formally announced next week, said Tesia Williams, a spokeswoman for the Health and Human Services inspector general’s office. It follows a narrower probe which found that an electronic system for pharmacies to verify Medicare coverage was being used for potentially inappropriate searches seemingly tied to marketing. It raised red flags about possible fraud.

The watchdog agency’s decision comes amid a wave of relentlessly efficient telemarketing scams targeting Medicare recipients and involving everything from back braces to DNA cheek swabs.

For years, seniors have been admonished not to give out their Medicare information to people they don’t know. But a report on the inspector general’s initial probe, also released Friday, details how sensitive details can still get to marketers. It can happen even when a Medicare beneficiary thinks he or she is dealing with a trustworthy entity such as a pharmacy or doctor’s office.

Key personal details gleaned from Medicare’s files can then be cross-referenced with databases of individual phone numbers, allowing marketers to home in with their calls.

The initial audit focused on 30 pharmacies and other service providers that were frequently pinging a Medicare system created for drugstores.

The electronic system is intended to be used for verifying a senior’s eligibility at the sales counter. It can validate coverage and personal details on millions of individuals. Analyzing records that covered 2013-15, investigators discovered that most of the audited pharmacies, along with a software company and a drug compounding service also scrutinized, weren’t necessarily filling prescriptions.

Instead, they appeared to have been tapping into the system for potentially inappropriate marketing.

Medicare stipulates that the electronic queries — termed “E1 transactions” — are supposed to be used to bill for prescriptions. But investigators found that some pharmacies submitted tens of thousands of queries that could not be matched to prescriptions. In one case, a pharmacy submitted 181,963 such queries but only 41 could be linked to prescriptions.

The report found that on average 98% of the electronic queries from 25 service providers in the initial audit “were not associated with a prescription.” The inspector general’s office did not identify the pharmacies and service providers.
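The arithmetic behind that red flag is simple enough to sketch in Python. The provider names and numbers below are hypothetical, echoing the example quoted above; the report itself did not publish per-provider figures.

```python
# Hypothetical screen: flag providers whose E1 eligibility queries
# can't be matched to prescriptions they actually filled.
providers = {
    "pharmacy_A": {"e1_queries": 181_963, "matched_to_rx": 41},
    "pharmacy_B": {"e1_queries": 12_000, "matched_to_rx": 11_500},
}

for name, stats in providers.items():
    unmatched_rate = 1 - stats["matched_to_rx"] / stats["e1_queries"]
    flag = "FLAG for audit" if unmatched_rate > 0.5 else "ok"
    print(f"{name}: {unmatched_rate:.1%} of queries unmatched -> {flag}")
# pharmacy_A: ~100.0% unmatched -> FLAG (cf. the report's 98% average)
```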

Pharmacies are able to access coverage data on Medicare recipients by using a special provider number from the government.

But investigators found that four of the pharmacies they audited allowed marketing companies to use their provider numbers to ping Medicare. “This practice of granting telemarketers access to E1 transactions, or using E1 transactions for marketing purposes puts the privacy of the beneficiaries’ (personal information) at risk,” the report said.

Some pharmacies also used seniors’ information to contact doctors treating those beneficiaries to see if they would write prescriptions. Citing an example, the report said, “The doctor often informed (one) provider that the beneficiary did not need the medication.”

The inspector general’s office said it is investigating several health care providers for alleged fraud involving E1 transactions. Inappropriate use of Medicare’s eligibility system is probably just one of many paths through which telemarketers and other sales outfits can get sensitive personal information about beneficiaries, investigators said.

A group representing independent drugstores expressed support for the investigation. “It’s about time,” said Douglas Hoey, CEO of the National Community Pharmacists Association. “We welcome the effort to clean up this misbehavior.” Hoey said some local pharmacists have complained of what appear to be sophisticated schemes to poach customers who take high-cost drugs.

The watchdog agency began looking into the matter after the Centers for Medicare and Medicaid Services, or CMS, asked for an audit of a mail order pharmacy’s use of Medicare’s eligibility verification system.

In a formal response to the report, CMS Administrator Seema Verma said CMS retooled its verification system last year so it automatically kicks out queries that aren’t coming from a pharmacy. More than a quarter-million such requests have been rejected, she wrote.

Medicare is committed to ensuring that the system is used appropriately, Verma added. The agency can revoke access for pharmacies that misuse the privilege and is exploring other enforcement options.

The inspector general’s office acknowledged Medicare’s countermeasures but said it wants to see how effective they’ve been.

Health care fraud is a pervasive problem that costs taxpayers tens of billions of dollars a year. Its true extent is unknown, and some cases involve gray areas of complex payment policies.

In recent years, Medicare has gotten more sophisticated, adapting techniques used by financial companies to try to head off fraud. Law enforcement coordination has grown, with strike forces of federal prosecutors and agents, along with state counterparts, specializing in health care investigations.

Officials gave no timetable for completing the audit.