By Adam Lori | Last Upload on February 27th, 2023
With the growing use of AI and machine learning models such as ChatGPT, new questions keep popping up, such as "Will AI take my job!?" and, especially within the cyber security community, "Will AI be used in cyber security, and for good or for bad?" In today's article, we will discuss what AI and ML are, how they differ, their use cases, and finally the ways AI and ML might be used in cyber security! It is important to note that this article is a hypothesis formulated from currently known statistics, knowledge of artificial intelligence, machine learning models, mathematics, and algorithms, and the writer's own experience with AI.
Do note that if you are looking for a rigorous scientific result, you are most likely not going to get one, as at this stage there can only be assumptions and hypotheses about what will happen and about the possible use cases within cyber security.
What are AI and ML, and what is the difference?
AI is a term that refers to a field of study that aims to create programs that can perform tasks which would typically require human intelligence, such as problem solving that relies on understanding natural language. This can span from recognizing images and faces to finding bugs in code or suggesting fixes.
For example, TabNine is an AI assistant that can help you along the development path by recommending new ways of programming based on your project workspace.

ML sits under the same umbrella but is a distinct discipline: in technical terms, machine learning is a subset of artificial intelligence focused on developing statistical models and algorithms that let computers learn from human-supplied data and make decisions without being explicitly programmed for each task. An example of a machine learning model is ChatGPT, a chatbot that was not programmed for one specific task but was instead trained, using specialized algorithms and mathematics, to generalize from its training data rather than simply copy it, so that it can form its own responses. ChatGPT was trained by OpenAI over the course of a few years on a constant feed of human-generated data, from which the model learned to formulate its own output and ideas. However, the fact that it can produce human-like responses does not mean it can think everything up purely on its own: because it is trained on data produced by humans, it can in some cases repeat or re-use that data, resulting in plagiarism, faulty code, and so on, even though its output looks polished most of the time.
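To make "learning statistics from data rather than copying it" concrete, here is a minimal sketch of the simplest possible language model: a bigram counter. This is purely illustrative; real models like ChatGPT are neural networks with billions of parameters, but the core idea of predicting the next word from patterns in the training text is the same.

```ruby
# Toy bigram "language model": count which word follows which
# in the training text, then generate by always picking the
# most frequent follower. Illustrative only.
training_text = "the model learns the data and the model predicts the next word"
words = training_text.split

# Count followers for each word.
followers = Hash.new { |h, k| h[k] = Hash.new(0) }
words.each_cons(2) { |a, b| followers[a][b] += 1 }

# Generate: start from a seed word and repeatedly append the
# most common follower seen in training.
def generate(followers, start, length)
  out = [start]
  (length - 1).times do
    nxt = followers[out.last].max_by { |_, count| count }
    break unless nxt
    out << nxt.first
  end
  out.join(" ")
end

puts generate(followers, "the", 4)   # => "the model learns the"
```

Note that the generated sentence is not a verbatim copy of the training text: the model recombines fragments according to the statistics it learned, which is also why larger models can accidentally reproduce training data.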
The use cases of AI and machine learning models such as TabNine
AI can be extremely helpful to people in the tech field and can provide a massive support system for the industry. Tools such as TabNine, which improve code quickly while also teaching the programmer, can make human-written programs more efficient and more secure without the constant need to google, dig through a book, or find a person to explain more. In more extreme cases, AI and ML models may be used to help defend and attack security systems globally: Pwnagotchi, for instance, is a handheld tool that uses reinforcement learning to get better at capturing Wi-Fi handshakes with each target it encounters. AI like this is where the questions start to form about the offensive and defensive roles of AI in cyber security and related fields, as well as the pros and cons of the impact that systems like TabNine and ChatGPT can have on the workforce and on security teams. This leads us to our next section.
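As an illustration of the kind of small improvement an assistant like TabNine might suggest (a hypothetical example written for this article, not actual TabNine output): replacing a hand-rolled loop with an idiomatic built-in that does the same thing in one line.

```ruby
# Hand-written version a programmer might start with:
def sum_of_squares(numbers)
  total = 0
  numbers.each do |n|
    total += n * n
  end
  total
end

# The tighter, idiomatic rewrite an assistant might suggest
# (hypothetical, not actual TabNine output):
def sum_of_squares_idiomatic(numbers)
  numbers.sum { |n| n * n }
end

puts sum_of_squares([1, 2, 3])            # prints 14
puts sum_of_squares_idiomatic([1, 2, 3])  # prints 14
```

Both behave identically; the value of the suggestion is that the programmer learns the idiom along the way.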
The impact machine learning models and AI programs have
Currently the field of AI and ML is very heavily researched and constantly improving. However, no matter how good the technology gets, one constant remains: AI will always carry some form of bias. This is known as algorithmic bias: as long as a model's training data is supplied by humans, the model's outputs will reflect the patterns, and the prejudices, in that data. For example, if you feed an ML model several bodies of code to choose from, it will form its "opinion" from those inputs. Because of this bias, ML and AI already have a major impact on the way people write their code and judge their own programs or ideas. ChatGPT, for example, has often been used to build people's projects, and in the end those projects flop or turn out horribly vulnerable and riddled with performance problems. Like a human, the model was trained on data that may itself have been biased; humans do something similar when we learn from two sources and form the best opinion we can. The fact that the model could produce its own code does not mean that code was safe, but people trusted it anyway, with limited knowledge of how these models work, and ended up with programs that were vulnerable and buggy.
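A minimal way to see algorithmic bias at work is a toy completion "model" that simply suggests whichever pattern dominates its training data (an illustrative Ruby sketch, not a real model). If most of the training examples build SQL queries by string interpolation, a well-known injection risk, the model dutifully suggests the risky pattern, because it has no notion of safety, only of frequency.

```ruby
# A trivially "biased" suggester: it recommends whichever
# pattern is most common in its training corpus, with no
# notion of whether that pattern is safe.
training_snippets = [
  %q{db.query("SELECT * FROM users WHERE id = #{id}")},   # interpolation (unsafe)
  %q{db.query("SELECT * FROM users WHERE id = #{uid}")},  # interpolation (unsafe)
  %q{db.query("SELECT * FROM users WHERE id = ?", id)},   # parameterized (safe)
]

# Tally how often each style appears in the training data.
counts = Hash.new(0)
training_snippets.each do |s|
  counts[s.include?('#{') ? :interpolation : :parameterized] += 1
end

# The "model" suggests the majority pattern.
suggested = counts.max_by { |_, c| c }.first
puts "model suggests: #{suggested}"   # prints "model suggests: interpolation"
```

Because two of the three training snippets use the unsafe style, the suggester recommends it; feed it a corpus of parameterized queries instead and the bias flips. The lesson scales up: a model is only as safe as the code it was trained on.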
This impact can be counted as negative. On the other hand, AI and ML models can teach people a great deal when used correctly, bias and all: as with most things in computer science, a formulated opinion or hypothesis can be tested and its outcome checked and verified, that is, if the person on the other end even wants to. In the end, the pros and cons cannot be fully determined, because everything depends on how the human uses the AI or ML model and what questions or input they feed it. If I tell it to write some code and slap the result straight into my editor, there is a chance it will be vulnerable or simply not work, due to the older development standards it was trained on. But if I ask it specific questions about a particular code base and idea, I might actually learn something and end up with safer, more performant development practices.
Solving the question "Will AI take my job?"
The short answer is no: even well into the future, neither AI nor ML will be able to take your job. The longer answer is that other people might take your job by using AI and ML models to do the work for them. It has recently become apparent that people use models like ChatGPT to write papers, develop new ideas, and even do their homework. The knee-jerk reply is "oh, it is an AI, so it is perfect, and these people will cheat their way to the top", but honestly, what has actually happened to those people points the other way: anyone who rides AI into your job will not hold it for long. Most companies will fire you for copying code from other people, and since an ML model's output is stitched together from its training data, it can amount to code plagiarism; an employer or teacher who can tell that a programmer or student is leaning on AI will end up terminating them. For example, a group of programmers reportedly risked it all by having ChatGPT write long code snippets to make their job easier, and they were fired when the code ended up causing more issues in the application.
Another example: students are complaining that when they used it for their assignments they got a plagiarism strike. But how!? Truth be told, there are AI/ML detectors out there that can pick this up easily. Consider also a programmer copying code from ChatGPT: it is easy to tell when someone pastes in code from an ML model, because their style suddenly switches, say from tight one-liners that eliminate useless code to newbie mistakes no one in an advanced position should make, such as not creating functions to handle errors or allocating memory the program never uses. For example, say you tell ChatGPT to write a Crystal program, a class and a module, that outputs "hi world" given a string. ChatGPT's answer has its own consistent house style, and when the programmer's surrounding code is written in a different style, the difference is obvious and it is easy to tell the code was copied from ChatGPT. But what about in a more advanced code base?
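The original snippet did not survive editing, so here is a sketch of the kind of answer ChatGPT might give (a hypothetical reconstruction, not an actual transcript). Crystal's syntax is very close to Ruby's, so the Ruby version below runs as-is and is nearly identical to the Crystal equivalent, apart from the type annotations Crystal would add.

```ruby
# Hypothetical reconstruction of a ChatGPT-style answer to
# "write a class and module that output hi <string>".
# Written in Ruby; Crystal's syntax is nearly identical.
module Greeter
  class Hello
    def initialize(name)
      @name = name
    end

    # Returns the greeting rather than printing it directly,
    # which keeps the class easy to test.
    def greet
      "hi #{@name}"
    end
  end
end

puts Greeter::Hello.new("world").greet   # prints "hi world"
```

The point stands regardless of language: model output like this has a tidy, uniform style, and when it lands in the middle of hand-written code that reads very differently, the pasted block stands out.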
There, it will be even more catchable! So, given the current state of AI and the way people are abusing it, I do not think anyone is really going to be able to take your job simply by using AI or ML models like ChatGPT. Companies have even started limiting the use of AI and ML models internally, stating that anyone caught abusing them will be terminated and forced to leave.
Finalizing the question: AI will not take your job; it is far from it. Even in a future where AI becomes more widely used, the cycle will repeat: flaws in these systems will send people back to human hackers to exploit them and gain access to systems, the use of AI will be scaled back in favor of humans, and then the cycle will begin again.
The use cases for AI in cyber security
AI and ML have completely different use cases within the cyber security realm. Some AI programs are already being used to check and verify signatures and to learn to detect malware in systems, while others are being used to attack systems. Currently, it is too new to say "well, AI is bad for cyber security", because we have yet to see full-fledged reports and research teams working with AI and machine learning models to defend security systems or to work against them. The assumption that AI will always be able to defend is most likely false; think of it like a game's anti-cheat system.
The anti-cheat system exists to prevent the game from being exploited, but if the anti-cheat itself has a flaw in its own programs and processes, it becomes vulnerable and can be bypassed, defeating the whole purpose of the system in the first place: protecting the software from attack. It is safe to say that no matter whether we use AI or ML, systems will always have weaknesses, and human hackers will always find a way around them, even systems that are intensely secure and monitored 24/7 by the system itself. The use cases for AI can be good but, as mentioned above, also bad: the same technology can help teach and educate people about the world of cyber security or be used to attack and kill off systems meant to do good. In the end, the best thing we can do for right now is just wait and see what comes next!
The world of AI and ML is intense, and it takes a lot of understanding of these models to truly formulate a decent hypothesis about what will happen with them in the future. Will they take our jobs and force us all inside? Most likely not: even if AI can do things such as programming and security research, there will still need to be humans to keep up that software, to program it, and to create the algorithms that train safer, more acceptable models.
Sure, AI may be abused by people even though it should not be, but that is being caught now and will be caught even more easily in the future. I really do hope, and for the most part expect, that this wave of AI hype will drop sooner or later, and that people will eventually stop abusing these systems, given how many are getting fired or dropped from their schools. I hope you enjoyed this article on AI and ML research; hopefully it was able to give you a good understanding of where to start in this realm, and I hope it answered some questions for you!