Possible ways to verify images and videos created by Artificial Intelligence


From primitive man to modern man, various inventions, processes, and industrial revolutions have shaped the journey. Having passed through three stages of that revolution, we are now moving towards the fourth industrial revolution. Its main sectors include big data, 3D printing, the Internet of Things (IoT), robotics, drones, sensor technology, artificial intelligence and machine learning, augmented reality, nanotechnology, blockchain, and more. At the same time, there are already examples of false and misleading propaganda spread with information, images, and videos created by artificial intelligence (AI), misusing the benefits of this technology.

Artificial Intelligence

According to Investopedia, artificial intelligence (AI) is the imitation of human intelligence by machines or programs; in other words, a program that can think like a human is called artificial intelligence. These machines or programs are designed to think like humans and to imitate human actions. The term can be applied to any machine that exhibits traits associated with the human mind, such as learning or problem-solving.

According to a report in Dainik Ittefaq, artificial intelligence is human-like intelligence and thinking implemented through technology-based machines. It is a branch of computer science that tries to simulate human intelligence and thought with computers.

Content generated by artificial intelligence or A.I

Content generated by artificial intelligence (AI) falls mainly into three types, and each type requires a different verification approach.

  • Information or text content generated by artificial intelligence
  • Image content generated by artificial intelligence
  • Video content generated by artificial intelligence

Verification of content generated by artificial intelligence

AI-generated content is difficult to verify with absolute certainty. The methods available so far can only give an idea of the probability that a piece of content is AI-generated; proving it conclusively against a definitive source is difficult or nearly impossible, so decisions in these cases rest on a measure of probability. Verification methods for AI content are still gradually being developed and discovered, so it cannot be guaranteed that content can always be verified this way; only a few possible methods can be mentioned here. These apply to all types of AI content (text, images, and videos). Some more AI programs and platforms are introduced in this video report.

  • Find out whether the various AI platforms have their own content detection or identification tools and, if so, take the help of those tools. Some such tools are described in this report.
  • Search the internet for content detection tools and use them. However, it is not possible to say exactly how successful these tools are at detection. Check out one such tool here.
  • The AI organization can be contacted by email for inquiries. However, only very important topics, or topics that would have a negative impact if left unverified, should be prioritized for such emails.

[These procedures apply to the verification of all types of content (text, images, and videos) generated by artificial intelligence or AI.]

Verification of information or text content generated by artificial intelligence

OpenAI, the maker of ChatGPT and the most talked-about name in AI in recent times, has itself developed a tool to identify text generated by AI. See the report on the technology site The Verge about it here.

Other possible processes to verify AI-generated text content include:

  • Search Google or other search engines, or use Google Advanced Search, for the information itself, its stated source, or any other sources.
  • Input questions related to the text content, or the ideas behind it, into various AI platforms and compare their answers with the text. However, if the questions do not match exactly, slightly different answers may be produced; in that case all likely variations of the input can be tried.
  • Check whether the same text or information is available from other authentic sites. If it is, the publication date of the content or the platform (the authentic source) should be used to decide which site the information originally came from. In many cases AI platforms themselves draw on data from other sites, and the AI-generated text can be set aside once the information is taken from the original source.
  • If you ask an AI platform for data or statistics, cross-check its answer with the data of the organization concerned, because AI platforms have been observed giving different information in response to the same question.
  • Read the information displayed by the AI platform carefully. Because the platform generates it artificially, the wording and sentence structure may be mechanical, like machine translation, rather than human. People naturally, often unconsciously, use different synonyms of the same word within a text, but this is not usually observed in AI-generated content.
  • Plagiarism checker tools can also help. AI often presents the content of a site with slight changes in different places; in such cases the original content and the AI-created version can be told apart (a minimal comparison sketch follows this list).
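
As a rough illustration of the comparison step above, the sketch below uses Python's standard difflib module to measure how similar a suspicious passage is to text found at a candidate original source. It is only a minimal sketch: the file names and the 0.6 threshold are assumptions for illustration, and a high score only indicates reuse or light rewording, not proof of AI generation.

```python
# Minimal sketch: compare a suspicious passage with text from a candidate
# original source. File names and the similarity threshold are illustrative.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a rough 0..1 similarity score between two passages."""
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

suspect = open("suspect_text.txt", encoding="utf-8").read()
candidate = open("candidate_source.txt", encoding="utf-8").read()

score = similarity(suspect, candidate)
print(f"Similarity: {score:.2f}")
if score > 0.6:  # assumed threshold, tune per case
    print("Large overlap: possibly a lightly reworded copy of the source.")
else:
    print("Little overlap: keep checking other sources.")
```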

An article on a site called GoldenPenguin.org discusses some other identification processes.

  1. Sentence length: AI-generated content often has very short sentences. This is because the AI is trying to mimic human writing but has not yet mastered complex sentence structure.

  2. Repetition of words and phrases: Another way to identify AI-generated content is to look for repeated words and phrases. If an article reads as though the same words are used over and over, it is more likely to have been written by an AI; in many cases an AI repeats a word or phrase so often that it sounds unnatural. (A simple counting sketch for these first two signals follows this list.)

  3. Lack of analysis: A third sign that an analytical article was written by AI is a lack of complex analysis. Machines are good at collecting data, but they are not yet very good at turning it into something meaningful.

  4. Inaccurate information: Misinformation is more common in AI-generated descriptions, but this kind of mistake can also appear in blog posts and articles. Because machines (AI programs) collect data from various sources, they sometimes make mistakes.

  5. Coherence: AI content can sometimes lack coherence and relevance, especially when dealing with complex topics. Human-written text, by contrast, is usually more coherent and follows a logical structure. In many cases AI-generated content can be recognized simply by reading it through.
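
The first two signals above (sentence length and repetition) are easy to approximate with simple counting. The sketch below is a heuristic only, assuming plain English text split on basic punctuation; the file name is an assumption, and none of these numbers can prove that a text is AI-generated.

```python
# Heuristic sketch for the "sentence length" and "repetition" signals above.
# Thresholds and the input file name are illustrative assumptions.
import re
from collections import Counter

def text_stats(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    avg_len = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    # Three-word phrases (trigrams) that repeat suspiciously often.
    trigrams = Counter(zip(words, words[1:], words[2:]))
    repeated = [(" ".join(t), n) for t, n in trigrams.most_common(5) if n > 2]
    lexical_diversity = len(set(words)) / max(len(words), 1)
    return {
        "avg_sentence_length": round(avg_len, 1),
        "repeated_phrases": repeated,
        "lexical_diversity": round(lexical_diversity, 2),
    }

sample = open("suspect_text.txt", encoding="utf-8").read()
print(text_stats(sample))
# Uniformly short sentences, heavy phrase repetition, and low lexical
# diversity are hints worth a closer look, never proof on their own.
```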

Verification of images generated by artificial intelligence

Some visible differences can be observed between normally captured images and AI-generated images, so careful observation of a picture can give an idea about it. For that reason any incongruous picture should be treated with suspicion. At the same time, it cannot be said for certain that every image suspected of being AI-generated actually is: similar-looking content comes in many forms, including sketches, art, and digital art, so try to rule those out as well.

Apart from the general process described above, which applies to all types of AI content verification, some additional methods can be used specifically for image verification.

  • Run a reverse image search on the image to see whether any information about it already exists on the internet.
  • Asking the original poster for information about the image: if an image is suspected to be AI-generated, try to identify the original uploader by using the caption, any keywords associated with it, reverse search, or any other means, and then ask the uploader about the image. There is no guarantee, however, that they will respond.
  • Search the social media accounts and handles of the platforms that generate images with AI programs for the image in question. However, most platforms have not yet started uploading or storing their generated content anywhere.
  • Looking at the received image and its context, construct the most likely input keywords or instructions and feed them to various AI image generators. If an identical or nearly identical image is produced, the image in question can be considered AI-generated (a rough near-duplicate check is sketched after this list).
  • If a picture presents a new idea or suggests an event at a particular place, but no actual image of that place or information about that event can be found on the internet or in the media, it may be AI-generated. Likewise, if the picture shows an event in a place where it could not occur naturally, it may also be AI-generated. For example, a picture of a national landmark of Bangladesh covered in snow would be doubtful, because it does not usually snow in Bangladesh, and if it had, the media would certainly have reported it.
  • Sometimes sketches or drawings brought to life with the help of technology should be treated as technology-edited images even if they are not directly AI-generated. To identify them, try searching with sketch-related captions and other methods; if a name or source turns up, try to contact that source.
  • Looking for sources or evidence in the comments: with such pictures on social media, the person who created the picture often comments on its authenticity themselves, or someone who knows the truth may comment.
  • Check whether the platforms keep a collection of the images they generate, and try to find that collection or database if one exists. For example, images created from everyone's inputs to the Midjourney image generator can be browsed in its Discord community.
  • Looking for watermarks in images: some AI platforms add watermarks to the images they create, and sometimes the person who made the input adds a watermark while promoting the image. In those cases a watermark will be present on the AI-generated image and can itself be searched.
  • In most cases the large, close-up elements of an AI-generated image are clear and well formed, while small and distant elements are imperfect and blurry. Various inconsistencies of this kind can be found.
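
One of the steps above suggests regenerating an image from likely prompts and comparing the output with the suspect picture. The sketch below compares two image files with a perceptual hash using the third-party Pillow and ImageHash libraries (an assumption; install with pip install pillow imagehash). It only catches near-duplicates: two images that merely look similar in theme can still be far apart in hash distance.

```python
# Rough near-duplicate check between a suspect image and a regenerated
# candidate, using perceptual hashing (Pillow + ImageHash, both third-party).
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("suspect.jpg"))        # assumed file names
candidate = imagehash.phash(Image.open("regenerated.png"))

distance = suspect - candidate  # Hamming distance between the two hashes
print(f"Hash distance: {distance}")
if distance <= 8:  # illustrative threshold; lower means more alike
    print("Very close match: the images are near-duplicates.")
else:
    print("Not a near-duplicate; this check alone says little either way.")
```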

Alongside this, AI has a process for creating images of people or animals that look exactly like living subjects. This is called Generative Adversarial Network (GAN) content. The method mainly produces lifelike people, cats, horses, artworks, cities, and so on. GAN content is even more difficult to verify.

But with the passage of time, technological improvements for verifying this content are also being made. A link to one such GAN content verification site can be found here, and some more tools of this kind are available on the internet. Even if no definitive answer is found, these searches can still help.

Verification of videos generated by artificial intelligence

A video created by feeding instructions to a machine or AI program is called an AI-generated video. Deepfake videos are also created with artificial intelligence; in deepfakes, however, the fake video is made mainly in the likeness of a celebrity or another specific person. For that reason, the processes used to verify deepfake videos also apply to verifying ordinary AI-generated videos.

Ways to detect deepfake videos have also been developed over time. Normal-quality deepfake videos can often still be detected with the naked eye, that is, with critical thinking: abnormal facial expressions, unusual blinking or head movements, unrealistic skin color or abnormal skin changes, jitter, speech that does not match the lip movements, a face that is blurrier than the background, lighting problems on the subject, extra pixels in the frame, and so on. In many cases deepfake videos can also be verified with technical tools. Although there are no tools directly open to the public, some technology companies have internal tools that detect deepfakes, and research is under way to develop new detection tools.

Although audio manipulation is also being discussed, most cases take the form of deepfake video. Deepfake verification can therefore be done by running a reverse image search on the most relevant frames, or on frames taken at different points in the video. Photo metadata tools can be used for images and video metadata tools for video, as sketched below. InVID can be used for ordinary videos and Amnesty International's YouTube DataViewer for YouTube videos.
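
The photo metadata check mentioned above can be sketched with the third-party Pillow library. Cameras and phones usually embed EXIF data (make, model, capture time) in photographs, while images exported from AI generators typically carry little or none; note that social media platforms also strip EXIF on upload, so an empty result is only a weak hint, never proof.

```python
# Weak-signal sketch: inspect a photo's EXIF metadata with Pillow (third-party).
# Missing EXIF does NOT prove AI generation; many platforms strip it on upload.
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path: str) -> dict:
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

info = read_exif("suspect.jpg")  # assumed file name
if not info:
    print("No EXIF data: either stripped on upload or never written by a camera.")
else:
    for key in ("Make", "Model", "DateTime", "Software"):
        if key in info:
            print(f"{key}: {info[key]}")
```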

The Face Forensics tool was developed by a team at the Visual Computing Lab in Munich, Germany. The program can detect video manipulation in raw-format files; however, it has not yet succeeded in producing reliable results for videos compressed for the web.

As technology improves, deepfake techniques and GAN processes are becoming more sophisticated, and in the near future it may become impossible to tell whether a video is genuine.

That is why efforts are under way to develop AI-based countermeasures against deepfakes; but as the technology continues to evolve, these countermeasures must keep pace. Recently Facebook and Microsoft, along with a number of other companies and prominent US universities, formed a consortium behind the Deepfake Detection Challenge (DFDC). The initiative encourages researchers to develop technology that can detect whether a video has been altered using artificial intelligence.

It is also possible to detect deepfakes using biometrics, the physical characteristics of our bodies; knowing a person's biometrics makes it possible to identify that person. Biometrics can be used to detect deepfakes in two ways: behavioral biometrics, and artificial-intelligence-based facial recognition.

The points mentioned for detecting deepfake videos apply to verifying general AI-generated videos as well, along with thorough scrutiny of the video content. In addition:

  • To research the video, take screenshots or still images from different scenes and run reverse image searches on them to see whether any information about the image or video already exists on the internet (a frame-extraction sketch follows this list).
  • Find and take the help of AI-generated-video recognition tools. Check out a Business Insider report on such tools here.
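
The first step above, pulling stills from different scenes for reverse image search, can be automated with the third-party OpenCV library as in the sketch below. The file name and the one-frame-every-five-seconds interval are assumptions; the saved frames can then be fed by hand to any reverse image search service.

```python
# Sketch: grab one frame every few seconds from a video so the stills can be
# reverse-image-searched by hand. Uses OpenCV (third-party); names are assumed.
import cv2

VIDEO_PATH = "suspect_video.mp4"   # assumed input file
INTERVAL_SECONDS = 5               # illustrative sampling interval

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS is unreported
step = int(fps * INTERVAL_SECONDS)

frame_index = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_index % step == 0:
        cv2.imwrite(f"frame_{saved:03d}.jpg", frame)
        saved += 1
    frame_index += 1

cap.release()
print(f"Saved {saved} frames for reverse image search.")
```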

Initiatives that AI platforms can take

It would be easier if AI platforms took some measures or developed tools to verify this content, because in recent times wrong information has already been spread with AI-generated images, and that trend is expected to increase with time. Initiatives AI platforms could take include:

  • Providing an option to verify whether a given text was generated by the platform.
  • Keeping a database of all images or videos created through the platform open on their website, or introducing an on-demand facility for verification.
  • If the database is kept open, arranging for it to be searchable with the platform's own general search or a reverse image search.
  • If keeping the database open or launching a normal on-demand process is not possible, providing a way to request information about a particular image for verification by email or through an in-site text form.
  • Or launching these facilities exclusively for fact-checkers and media professionals through a collaboration, so that at least fact-checkers can verify and publish reports on content that could affect issues of national or international importance.

Therefore, any content that is in doubt should be verified. Wherever possible, content should only be shared, trusted, or commented upon after verification; in particular, all types of content must be verified before media or other critical use.

