This article examines the security of currently popular corporate instant messaging applications (messengers). A comparative security analysis of several solutions for corporate use is carried out. The main results of the review are conclusions about the advantages and disadvantages of the considered systems, which organizations can use to choose an appropriate solution.
Keywords: information security, corporate messenger, messaging, internal communications, instant messaging systems, end-to-end encryption
At the moment, quantum key distribution (QKD) technology guarantees the highest level of data exchange security, which makes QKD networks one of the most promising areas in the field of computer security. Unfortunately, the problem of topology optimization when planning and extending QKD networks has not attracted enough attention. This paper reviews approaches that use analytical models in the topology optimization problem of quantum key distribution networks. Different methods that solve problems of network capacity and security maximization and cost minimization are reviewed, the utilized algorithms are described, and conclusions about possible further research in this area are drawn.
Keywords: quantum key distribution, mathematical modeling, network topology, analytical modeling, topology optimization
The purpose of the study is to substantiate the need to introduce the concept of a cyber attack as an emergency situation, in order to reduce the number of cyber threats in institutions, organizations and enterprises that use modern information technologies. The study analyzes cyber attacks on life-support facilities worldwide and in Russia in particular. Media in many countries currently report an avalanche-like growth in the number of cyber attacks, which makes information security one of the most important issues in the activities of government agencies and production facilities. The problem of information security is studied in detail, taking into account the consequences of cyber attacks on technological systems in all spheres of human activity.
Keywords: cybersecurity, cyber emergency, monitoring, Zero Trust model, cyber insurance
The growing popularity of large language models in various fields of scientific and industrial activity has led to solutions that apply these technologies to very different tasks. This article proposes using the BERT, GPT, and GPT-2 language models to detect malicious code. A neural network model pre-trained on natural texts is fine-tuned on a preprocessed dataset containing program files with malicious and harmless code. During preprocessing, program files in the form of machine instructions are translated into a textual description in a formalized language. The model trained in this way is used to classify software according to whether it contains malicious code. The article reports on an experiment with the proposed model, evaluates the quality of this approach in comparison with existing antivirus technologies, and suggests ways to improve the characteristics of the model.
Keywords: antivirus, neural network, language models, malicious code, machine learning, model training, fine tuning, BERT, GPT, GPT-2
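The preprocessing step described above can be sketched in a few lines. The mapping below, from instruction mnemonics to formalized-language tokens, is purely hypothetical; the paper does not specify its vocabulary, and a real pipeline would cover the full instruction set before feeding the resulting text to the fine-tuned language model.

```python
# Illustrative sketch (not the authors' actual pipeline): translating a
# sequence of machine instructions into a formalized textual description
# that a language model such as BERT or GPT-2 could be fine-tuned on.

# Hypothetical mapping from mnemonics to formalized-language tokens.
OPCODE_VOCAB = {
    "mov":  "TRANSFER",
    "push": "STACK_WRITE",
    "pop":  "STACK_READ",
    "call": "INVOKE",
    "jmp":  "BRANCH",
    "xor":  "LOGIC_XOR",
}

def instructions_to_text(instructions):
    """Map disassembled instructions to a space-separated token string."""
    tokens = []
    for line in instructions:
        mnemonic = line.split()[0].lower()
        tokens.append(OPCODE_VOCAB.get(mnemonic, "UNKNOWN_OP"))
    return " ".join(tokens)

sample = ["push ebp", "mov ebp, esp", "xor eax, eax", "call printf"]
print(instructions_to_text(sample))
# The resulting string is then tokenized and handled by the classifier
# like any other text sample.
```

The point of the translation is that the classifier never sees raw bytes, only a textual description in a small, consistent vocabulary, which matches the natural-text domain the model was pre-trained on.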
The article proposes the use of intelligent methods for predicting the reliability of contract execution as a key element of the system for ensuring information security of the critical infrastructure of financial sector organizations. Based on the analysis of historical data and the use of machine learning methods, a comprehensive model for assessing and predicting the risks of failure or poor performance of contracts by suppliers has been developed. It is shown how the use of predictive analytics can improve the efficiency of information security risk management, optimize planning and resource allocation, and make informed decisions when interacting with suppliers of critical services and equipment.
Keywords: intelligent system, predictive analytics, information security, critical infrastructure, financial sector, contract execution, machine learning
In the process of ensuring information security, an important element is the protection of data from malicious influence. One of the stages of protection is identifying the source of the threat. The source may be the attacker himself, acting through his own user profile, so detecting and identifying a malicious profile is a key element of building protection. In the context of user profiling, social networks are well-researched systems, whereas Learning Management Systems (LMS), despite their similarity to social networks, have received little scientific attention. Because of this similarity and the increasing introduction of learning management systems into information infrastructures, such systems are becoming a vulnerable point. Several approaches exist for detecting malicious activity, including statistical methods, machine learning methods, and rule-based methods. Our proposed approach combines a rule-based system with machine learning algorithms to improve the accuracy of detecting malicious activity: the rule-based component gives an interpretable view of potential anomalies, while the machine learning component captures the patterns of specific attacks. This paper highlights the aspects of detecting malicious activity and profiling users in learning management systems and provides an example of one such method.
Keywords: information security, information system, social network, learning management system, user profiling, malicious behavior, classification
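The hybrid idea described above can be illustrated with a toy sketch: hand-written rules flag suspicious LMS activity, and a simple weighted score (standing in for the trained machine learning component) combines the evidence. All rule thresholds, profile fields and weights below are invented for the example.

```python
# Minimal sketch of a rule-based + scoring hybrid for flagging suspicious
# LMS user profiles. Thresholds, field names and weights are hypothetical.
import math

RULES = [
    ("too_many_logins", lambda p: p["logins_per_hour"] > 30),
    ("night_activity",  lambda p: p["night_actions"] > 50),
    ("mass_downloads",  lambda p: p["downloads_per_day"] > 200),
]

def rule_flags(profile):
    """Return the names of all rules this profile triggers."""
    return [name for name, rule in RULES if rule(profile)]

def risk_score(profile, weights):
    """Weighted sum over triggered rules, squashed to (0, 1)."""
    s = sum(weights[name] for name in rule_flags(profile))
    return 1 / (1 + math.exp(-s + 1))  # logistic squashing; offset is arbitrary

weights = {"too_many_logins": 1.5, "night_activity": 1.0, "mass_downloads": 2.0}
profile = {"logins_per_hour": 45, "night_actions": 10, "downloads_per_day": 300}
print(rule_flags(profile), round(risk_score(profile, weights), 3))
```

In the paper's approach a trained classifier, not a fixed weight table, would produce the final score; the sketch only shows how interpretable rule flags and a numeric risk estimate can coexist.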
The article analyzes the relevance and current state of phishing attacks on critical information infrastructure (CII) facilities. Phishing, as one of the most common types of cyber attacks, poses a serious threat to the security of information systems and data. The purpose of the study is to identify the main characteristics and tactics of phishing attacks, as well as to assess how well CII facilities are protected from this type of threat. The research uses data on the latest phishing trends and methods collected from various sources, including cybersecurity reports, incident statistics and analyses of successful attacks. The main focus is on analyzing the targets of phishing attacks in the context of their importance for the continuous operation of critical information infrastructure. Based on the analysis, recommendations are formulated for improving protection against phishing attacks at critical information infrastructure facilities. The study aims to raise awareness among cybersecurity professionals and security policy makers about the emerging risks of phishing and, above all, to ensure effective protection of the information resources that are integral to the functioning of critical infrastructure.
Keywords: information security, phishing attacks, information infrastructure, mathematical modeling, software package
Staffing in the field of information security is currently an urgent task for many organizations. The main difficulty lies in determining the correspondence between functional responsibilities and working-time norms for each individual specialist. At present, there are no regulatory documents that strictly define labor-cost norms for the specific functional duties of specialists and would thus give an exact answer about the quantitative composition of an information protection unit. This paper implements one solution to the problem: constructing a quantitative model that describes the size of information security units based on typical indicators of an organization's information security needs.
Keywords: information protection, regression model, model competition, adequacy criteria, forecasting, staffing of information protection units
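As a minimal illustration of the kind of quantitative model discussed above, the sketch below fits a one-predictor least-squares regression relating an organizational indicator to unit size. The observations and the chosen predictor (number of protected workstations) are invented for the example; the paper's actual approach involves a competition between several candidate models checked against adequacy criteria.

```python
# Illustrative least-squares sketch of a quantitative staffing model:
# predicting the size of an information-protection unit from a typical
# organizational indicator. The data are invented.

def fit_simple_regression(xs, ys):
    """Ordinary least squares for y = a + b*x (single predictor)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical observations: protected workstations vs. number of
# information security specialists in comparable organizations.
workstations = [100, 250, 400, 800, 1200]
staff        = [2,   4,   6,   11,  16]

a, b = fit_simple_regression(workstations, staff)
predicted = a + b * 600  # forecast for an organization with 600 workstations
print(round(predicted))
```

A real model would use several indicators and compare competing regression forms before forecasting, as the abstract's mention of model competition and adequacy criteria implies.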
The network component of the transition from a monolithic application architecture to microservices is considered. The growth of the attack surface during the transition to a microservice architecture is shown on the example of an application designed to keep centralized records of server equipment and of the employees responsible for its operation. Measures aimed at increasing the security of the network infrastructure of container orchestration systems are proposed.
Keywords: information security, microservice architecture, containerization security, network interactions, container network, container orchestration system, microservice, container, transition from monolith to microservices, increasing the attack surface
The article considers mathematical models for the collection and processing of voice content, on the basis of which a principal logical scheme for predicting synthetic voice deepfakes has been developed. Experiments have been conducted with the selected mathematical formulas and Python libraries that allow real-time analysis of audio content in an organization. The capabilities of neural network software for detecting voice fakes and generated synthetic (artificial) speech are considered, and the main criteria for examining voice messages are determined. Based on the results of the experiments, the mathematical apparatus needed to successfully detect voice deepfakes has been formed, along with a list of technical standards recommended for collecting voice information and improving the quality of information security in an organization.
Keywords: neural networks, detection of voice fakes, information security, synthetic voice speech, voice deepfakes, technical standards for collecting voice information, algorithms for detecting audio deepfakes, voice cloning
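One simple spectral criterion sometimes included in audio-spoofing feature sets is spectral flatness: synthesized signals often have atypically flat or atypically peaked spectra compared with natural speech. The pure-Python sketch below is only an illustration of the criterion, not the article's detection scheme; a practical system would use numpy/librosa and many more features.

```python
# Illustrative spectral-flatness computation (geometric mean / arithmetic
# mean of the magnitude spectrum) via a naive pure-Python DFT.
import cmath, math

def dft_magnitudes(samples):
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def spectral_flatness(samples):
    """Geometric mean over arithmetic mean of the magnitude spectrum."""
    mags = [m + 1e-12 for m in dft_magnitudes(samples)]  # avoid log(0)
    geo = math.exp(sum(math.log(m) for m in mags) / len(mags))
    arith = sum(mags) / len(mags)
    return geo / arith

# A pure tone has a sharply peaked spectrum, so its flatness is near 0;
# noise-like input yields a much larger value.
tone = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
print(round(spectral_flatness(tone), 6))
```

A detector would compute such criteria over short frames of the incoming audio stream and feed them, together with other features, to a trained classifier.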
With the development of digitalization of all spheres of society, the relevance of developing new software, and therefore methods of protecting it from illegal copying and reproduction, increases significantly. The main risks are related to both hacking of the finished release and leakage of individual code segments at the development stage. At the same time, the chances of leakage directly depend on both the number of specialists involved in the development process at different stages and the number of stages themselves. The purpose of this work is to develop an embedded module aimed at protecting software or its individual elements from illegal copying and further reproduction.
Keywords: software protection, information protection, embedded security module, hardware binding
The constant growth of cyber attacks on the financial sector requires the construction of a modern protection system based on the use of artificial intelligence or machine learning. The paper provides an analysis of specific products and solutions of the global market based on artificial intelligence technologies that can be used to protect critical information infrastructure.
Keywords: cyber attacks, critical infrastructure, artificial intelligence, information security, machine learning
This paper analyses intrusion detection techniques and provides recommendations for preventing intrusions in peer-to-peer wireless networks. Such networks are particularly vulnerable to attack due to their openness, dynamically changing topology, collaborative algorithms, lack of centralised monitoring, absence of a central control point and lack of a clear line of defence. Intrusion detection techniques exist for wired networks, but they are not directly applicable in a wireless environment. The paper also presents a new intrusion defence method based on intrusion detection in peer-to-peer wireless networks.
Keywords: security, vulnerability, information protection, attack, intrusion, wireless network, mobile network, detection system, IDS, MANET, DoS, DDoS
The paper is devoted to the development of a security concept for the protection of the critical information infrastructure of the financial sector. The critical information infrastructure of the financial sector is analyzed, and the main types of cyberattacks on objects in this area are considered. A security concept is proposed that includes access control, multilevel protection, data encryption, continuous monitoring and other measures. Models of the main threats to the security of financial-sector information infrastructure objects are given. The paper also raises the importance of cooperation and information exchange between financial institutions, regulators and law enforcement agencies for the collective security of the financial sector. The article will be useful for information security and financial-sector specialists and for managers of organizations interested in developing and improving the security of their enterprise's information infrastructure.
Keywords: information security, information infrastructure, financial sector, mathematical modeling, software package
One of the most reliable methods of identity verification is biometric authentication. There are two types of methods: static and dynamic. Static methods include fingerprint scanning, 3D facial recognition, vein patterns, retina scanning, etc. Dynamic methods include voice verification, keyboard handwriting and signature recognition. As of today, static methods have the lowest type I and II error rates, because their primary principle of operation is based on capturing biometric characteristics of a person that do not change throughout their lifetime. Unfortunately, the same property that accounts for such low error rates is also a drawback for widespread use in internet services: if biometric data is compromised, the user can no longer safely use the method anywhere. Dynamic biometric authentication methods are based on a person's behavioral characteristics, allowing the user to control the information entered for authentication. However, behavioral characteristics are more subject to change than static ones, resulting in significantly different type I and II error rates. The aim of this work is to analyze one of the dynamic methods of biometric authentication, which can be used in most internal and external information systems as a tool for authorization or confirmation of user intentions. Biometric user authentication based on a handwritten signature relies on comparing unique biometric features that can be extracted from the signature image. These unique features are divided into two categories: static and dynamic. Static features are extracted from the signature image and are based on characteristics such as point coordinates and the total length and width of the signature. Dynamic features are based on how the coordinates of the signature points depend on time. The more unique features are identified, and the more accurately each is weighted, the better the type I and II error rates will be.
This work focuses on algorithms that extract unique features from the static characteristics of a signature, since most signature peculiarities can be identified from the way individual segments of the signature image are written.
Keywords: static algorithms, metrics, signature length, scaling, signature angle
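The static features named above (point coordinates, total length, width, angle) can be sketched as follows for a signature reduced to an ordered point sequence. The point data are invented for illustration, and a real system would normalize (scale) the features before comparison.

```python
# Sketch of static feature extraction from a signature represented as an
# ordered sequence of (x, y) points: total path length, bounding-box
# width and height, and an overall slant angle.
import math

def static_features(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # Total path length: sum of distances between consecutive points.
    length = sum(math.dist(points[i], points[i + 1])
                 for i in range(len(points) - 1))
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    # Slant: angle of the line from the first to the last point, degrees.
    angle = math.degrees(math.atan2(ys[-1] - ys[0], xs[-1] - xs[0]))
    return {"length": length, "width": width, "height": height, "angle": angle}

signature = [(0, 0), (3, 4), (6, 0), (9, 4)]  # toy point sequence
print(static_features(signature))
```

Verification would then compare such feature vectors between a stored reference signature and a new sample, with each feature weighted according to how discriminative it is.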
Currently, to access information contained in autonomous and external information systems, a user must pass an authorization process based on modern methods of identity verification, such as password protection, one-time codes, electronic signatures, etc. These methods have traditionally worked well and continue to provide secure access; however, biometric authentication methods are more reliable when access to confidential information must be limited to a single user. Today, there are two types of biometric authentication methods: static and dynamic. Static methods are based on biological characteristics that remain with a person throughout their life, while dynamic methods are based on behavioral characteristics. Static methods are considered among the most accurate, because most biometric parameters do not change over a lifetime. However, they should only be used if the chance of data compromise is very low: in the event of a leak, the user will not be able to continue using such methods anywhere else. Dynamic methods, because of their behavioral nature, do not yet achieve fully satisfactory type I and II error rates, as these depend directly on the user's psychological and physical state. Unlike static methods, however, the user can control the information that will serve as the secret key for future authorization, so in case of a leak the user can always change the contents of the key for current and future services. This work examines one of these dynamic methods of biometric authentication: verification by handwritten signature. This method is more attractive than its counterparts because, given acceptable type I and II error rates, it can be applied in most existing services as a tool for authentication and for confirming user intentions when signing various types of documents.
The article discusses the main algorithms for verifying handwritten signatures by identifying unique dynamic features that depend on the temporal and coordinate values of the analyzed handwritten signature samples.
Keywords: dynamic algorithms, feature extraction, signature writing time, proximity of point coordinate functions, Fourier transform
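The Fourier-based comparison of coordinate functions mentioned in the keywords can be sketched as follows: treat x(t) of two signature samples as discrete signals, take the magnitudes of their first DFT harmonics, and measure the distance between the resulting feature vectors. The sample sequences and the idea of a fixed acceptance threshold are illustrative, not taken from the article.

```python
# Sketch of dynamic feature comparison via the discrete Fourier transform.
import cmath, math

def dft_features(signal, n_harmonics=4):
    """Magnitudes of the first few DFT harmonics of a coordinate function."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n for k in range(1, n_harmonics + 1)]

def distance(f1, f2):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

genuine = [0, 1, 3, 4, 3, 1, 0, -1]          # x(t) of a reference signature
attempt = [0, 1, 2.8, 4.1, 3, 1.1, 0, -0.9]  # x(t) of a close new sample
forgery = [0, 4, 0, 4, 0, 4, 0, 4]           # very different dynamics

d_ok = distance(dft_features(genuine), dft_features(attempt))
d_bad = distance(dft_features(genuine), dft_features(forgery))
print(round(d_ok, 3), round(d_bad, 3))
# A verifier would accept a sample when the distance falls below a
# threshold tuned on genuine and forged examples.
```

Using spectral magnitudes makes the comparison insensitive to where the signature starts in time, which is one reason Fourier features are popular for coordinate functions.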
The relevance of the study is justified by the growth of breaches in the field of information security, especially of personal data on the global network. The study puts forward two hypotheses. The first, about an inverse relationship between the density of hacked accounts per 1000 people and the GCI and NCSI cybersecurity indices, was refuted at the first stage. The second hypothesis, about the existence of factors that interact with cybersecurity factors and affect the leakage of personal data, was confirmed. Data outliers were then identified and excluded, leaving a final sample of 149 countries. The identified factor was GDP per capita. After eliminating the influence of this factor, correlation analysis confirmed an inverse relationship between the personal data leakage indicator (in the form of the density indicator) and the GCI and NCSI cybersecurity indices, with correlation coefficients of -0.43 and -0.48, respectively. The study suggests that there are problems with the completeness of the factors taken into account in the cybersecurity indices and with the reliability of the data received from governments, which requires further research.
Keywords: cyber breach, cybersecurity, GCI, NCSI, data leak, account hacking, personal data, sensitive information, data analysis, correlation
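The "eliminate the influence of GDP" step above amounts to a partial correlation: regress both variables on GDP per capita and correlate the residuals. The five-country dataset below is invented and deliberately constructed so that the GDP effect masks an inverse relationship, qualitatively mirroring the study's finding; the numbers have no relation to the real sample of 149 countries.

```python
# Sketch of partial correlation: correlate residuals after removing the
# linear influence of a confounding variable (GDP per capita).

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def residuals(ys, xs):
    """Residuals of ys after removing its least-squares fit on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return [y - (a + b * x) for x, y in zip(xs, ys)]

gdp     = [10, 20, 30, 40, 50]   # GDP per capita, thousands USD (invented)
density = [7, 8, 14, 20, 21]     # hacked accounts per 1000 people (invented)
index   = [34, 52, 65, 78, 96]   # cybersecurity index (invented)

raw = pearson(density, index)
partial = pearson(residuals(density, gdp), residuals(index, gdp))
print(round(raw, 2), round(partial, 2))
# Raw correlation is strongly positive (both track GDP), while the
# partial correlation on residuals is negative.
```

This is exactly why the study's first-stage analysis refuted the hypothesis while the GDP-adjusted analysis confirmed it: a common driver can reverse the apparent sign of a relationship.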
Destructive information in text is widespread and dangerous for children and teenagers as well as for adults. Current methods of searching for destructive information in text, such as keyword search and the inverse document frequency method, have a number of disadvantages that can cause false positives and reduce accuracy. This article considers a newly developed method of searching for destructive information in text, implemented as a Python module. The method uses the Spacy and pymorphy3 libraries, which allow it to examine a sentence in detail and delve into its meaning. The developed method reduces false positives and thus increases the efficiency of its use. The paper shows schemes of sentence parsing and the algorithm of the new method, along with figures demonstrating its operation, and presents a comparative analysis of the new method with its analogs.
Keywords: Spacy, disruptive content, information security, TF-IDF, keyword search, pymorphy3, Net Nanny, CyberPatrol, Oculus, child protection
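The TF-IDF baseline that the article improves upon can be sketched in pure Python. The corpus and keyword list below are invented; the example also shows the weakness the article targets, namely that keyword scoring ignores sentence meaning.

```python
# Sketch of the keyword / inverse-document-frequency baseline: scoring
# documents by TF-IDF weights of words from a destructive-content list.
import math

def tf_idf_scores(documents, keywords):
    docs = [d.lower().split() for d in documents]
    n = len(docs)
    scores = []
    for doc in docs:
        score = 0.0
        for word in keywords:
            tf = doc.count(word) / len(doc)          # term frequency
            df = sum(1 for d in docs if word in d)   # document frequency
            idf = math.log(n / df) if df else 0.0    # inverse doc frequency
            score += tf * idf
        scores.append(score)
    return scores

corpus = [
    "how to make friends at school",
    "instructions to harm yourself quickly",
    "harm reduction and safety advice",
]
print(tf_idf_scores(corpus, ["harm", "yourself"]))
# Note the false positive: the harmless "harm reduction and safety
# advice" also receives a nonzero score, because keyword methods do not
# analyze the syntactic and semantic role of the matched words.
```

A parser-based method, as proposed in the article, can use the dependency structure and morphology of the sentence to suppress exactly this kind of false positive.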
Images on websites, social networks and computers may contain destructive content and pose a threat to the psyche of a child or adolescent. Conventional image classification does not always classify such content correctly and has a number of drawbacks that can cause false positives and reduce classification accuracy. The paper presents a method, implemented as a Python module, that can detect malicious content in images. The method is based on the YOLOv8 model from the ultralytics library, which provides good image classification and enables further analysis. The developed method reduces the number of false positives, which increases its efficiency. The paper shows the scheme of operation of the new method, demonstrates the search for objects in images, reviews similar programs and compares them with the developed method.
Keywords: Spacy, disruptive images, information security, pymorphy3, ultralytics, disruptive text, disruptive content, YoloV8, child safety, benchmarking, digital hash
For cloud providers, not only the types of services they provide play a significant role, but also fault tolerance to service failures. It is important for a cloud service provider to prepare and configure the server and service for fault-tolerant operation so that the customer works with a high degree of availability and reliability in the system allocated to them. To prepare such a server, it is essential to think carefully about the architecture of the virtual machine on which all the necessary means of data exchange and integration will be installed, and to configure protection against network threats that could disrupt the server's performance. The purpose of this work is to create a virtual machine architecture, protected from network threats, that provides customers with access to the iTOP CMDB system. However many customers there are, each should be provided with their own instance of the iTOP CMDB system, which they can administer; a user logs in through an Internet browser by entering the name of their organization as a domain. The authors present a demonstration of the iTOP CMDB system hosted on a virtual machine protected from network threats.
Keywords: virtual machine, architecture, firewall, iTOP CMDB system, server, network threat, network attack, IP address, request
The concept of a two-dimensional associative masking mechanism, introduced earlier and needed for what follows, is used to protect the data of cartographic scenes represented by point, linear and areal objects. The masking mechanism is the basis of associative steganography: the objects and coordinates of the scene are represented by code words in an alphabet of postal symbols and are masked to form stegocontainers. The set of masks is a secret key used later to recognize a scene represented in protected form by a set of stegocontainers. The article deals with the organization of a specialized DBMS for protecting cartographic scene data, introducing two levels of such a DBMS: server and client. Mono- and multicluster organization of request processing is proposed for the server part of the DBMS, with practical recommendations on the use of mono- and multiclusters.
Keywords: associative steganography, masking, stegomessage, cartographic databases, parallel DBMS, mono- and multicluster, scene analysis, cryptography, stegostability, information security
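The masking idea can be illustrated with a deliberately simplified sketch, which is not the authors' actual associative mechanism: each scene object is encoded as a fixed-length code word over a small alphabet and masked by position-wise modular addition with a secret mask. The codebook, alphabet and mask generation below are all invented for the example.

```python
# Very simplified illustration of masking code words into stegocontainers.
# The mask set plays the role of the secret key: without it the
# containers reveal nothing about the encoded scene objects.
import random

ALPHABET = "0123456789"  # stand-in for the "postal symbol" alphabet
CODEBOOK = {"river": "014", "road": "275", "house": "930"}  # hypothetical

def mask_word(code_word, mask):
    """Position-wise modular addition of mask symbols (masking)."""
    return "".join(ALPHABET[(ALPHABET.index(c) + ALPHABET.index(m)) % 10]
                   for c, m in zip(code_word, mask))

def unmask_word(container, mask):
    """Inverse operation: recover the code word with the secret mask."""
    return "".join(ALPHABET[(ALPHABET.index(c) - ALPHABET.index(m)) % 10]
                   for c, m in zip(container, mask))

rng = random.Random(42)
scene = ["river", "road", "house"]
masks = ["".join(rng.choice(ALPHABET) for _ in range(3)) for _ in scene]

containers = [mask_word(CODEBOOK[obj], m) for obj, m in zip(scene, masks)]
recovered = [unmask_word(c, m) for c, m in zip(containers, masks)]
assert recovered == [CODEBOOK[obj] for obj in scene]
print(containers)
```

In the DBMS setting discussed in the article, recognition of a protected scene means processing many such containers, which is what motivates the mono- and multicluster organization of the server side.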
The work is devoted to the problem of decision support in the field of information security. The aim is to build, within the game-theoretic framework, an iterative procedure for determining a mixed strategy for ensuring information security under uncertainty about the state of the protected object and counteraction by an intruder. Simulation modeling (together with the Brown-Robinson fictitious play method) is used because the flows of events that change the state of the protected object may be non-Poisson, and because stochastic games with three participants are difficult to solve analytically. The developed procedure increases the scientific validity of managerial decisions on choosing protection strategies for stochastically dynamic objects, i.e., objects that change their state randomly.
Keywords: information security, uncertainty, counteraction, game-theoretic approach, simulation modeling
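The Brown-Robinson fictitious play procedure mentioned above can be sketched for a two-player zero-sum matrix game: each player repeatedly best-responds to the opponent's empirical strategy frequencies, which converge to optimal mixed strategies. The payoff matrix below (a matching-pennies-like defense game) is invented; the paper's setting is richer, with three participants and a stochastically changing object state.

```python
# Sketch of Brown-Robinson fictitious play for a zero-sum matrix game.
# A[i][j] is the defender's payoff; the defender maximizes, the attacker
# minimizes.

def fictitious_play(A, iterations=10000):
    m, n = len(A), len(A[0])
    row_counts = [0] * m  # how often each defense strategy was played
    col_counts = [0] * n  # how often each attack strategy was played
    i, j = 0, 0           # arbitrary initial pure strategies
    for _ in range(iterations):
        row_counts[i] += 1
        col_counts[j] += 1
        # Defender best-responds to the attacker's empirical frequencies...
        i = max(range(m),
                key=lambda r: sum(A[r][c] * col_counts[c] for c in range(n)))
        # ...and the attacker best-responds to the defender's.
        j = min(range(n),
                key=lambda c: sum(A[r][c] * row_counts[r] for r in range(m)))
    return ([c / iterations for c in row_counts],
            [c / iterations for c in col_counts])

# Matching-pennies-like game: the optimal mixed strategy is (0.5, 0.5).
A = [[1, -1],
     [-1, 1]]
x, y = fictitious_play(A)
print([round(p, 2) for p in x], [round(q, 2) for q in y])
```

In the paper's procedure the payoffs themselves come from simulation runs of the protected object rather than from a known matrix, but the iterative best-response structure is the same.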
An approach to constructing stream ciphers based on a new type of cipher gamma generator with a non-linear (fuzzy) shift register selection function is proposed. The best generator configuration is selected so as to produce a gamma whose properties are closest to white noise. It is shown that the proposed approach makes it possible to generate a gamma sequence whose quality exceeds that of a number of classical generators.
Keywords: cryptography, stream cipher, gamma, PNSG, random test, fuzzy logic, membership function, linguistic variable, defuzzification, linear feedback shift register
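A runnable illustration of the generator family discussed above: several LFSRs combined through a shift-register selection function. The paper's selection function is fuzzy; for a self-contained sketch this uses the classic crisp Geffe-style combiner, where a third LFSR selects which of two registers supplies each gamma bit. Taps and seeds are arbitrary examples, not values from the paper.

```python
# Illustrative LFSR-based gamma generator with a selection function.

def lfsr(state, taps, nbits):
    """Fibonacci LFSR: yields one output bit per step."""
    while True:
        yield state & 1
        fb = 0
        for t in taps:          # feedback = XOR of the tapped bits
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))

def geffe_gamma(n):
    a = lfsr(0b10011, (0, 2), 5)        # data register 1
    b = lfsr(0b1100101, (0, 1), 7)      # data register 2
    sel = lfsr(0b101101011, (0, 4), 9)  # selection register
    out = []
    for _ in range(n):
        x, y, s = next(a), next(b), next(sel)
        out.append(x if s else y)       # crisp selection: s ? x : y
    return out

gamma = geffe_gamma(64)
print("".join(map(str, gamma)))
# The gamma would be XORed with plaintext bits; statistical randomness
# tests then check how close the sequence is to white noise.
```

The fuzzy variant in the paper replaces the crisp `s ? x : y` choice with a membership-function-based selection over the registers, which is what the keywords on fuzzification and defuzzification refer to.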
This article discusses the use of universal adversarial attacks to improve the effectiveness of protection systems against bots and spam. In particular, the key features that must be taken into account to ensure an optimal level of protection against bots and spam are considered. It is also discussed why modern methods of protection are ineffective, and how universal adversarial attacks can help eliminate existing shortcomings. The purpose of this article is to propose new approaches and methods of protection that can improve the effectiveness and stability of protection systems against bots and spam.
Keywords: machine learning, clustering, data recognition, Nanonets library, Tesseract library
In today's information environment, characterized by the increasing digitalization of various aspects of daily life, information security is of paramount importance. Many types of personal information, including identity, financial and medical records, are stored digitally. Organizations need to protect their intellectual assets, sensitive data and business information from competitors and insider threats. The synergistic combination of cryptography and steganography makes transmitted data significantly harder to analyze and reduces its vulnerability to attacks based on statistical analysis and other pattern detection techniques. Associative steganography is a methodology that integrates the basic principles of steganography and cryptography to provide strong data protection. A software application for associative file protection can be applied in a wide range of areas and has significant potential in the context of information security. This article discusses the prerequisites for creating such an application, describes its program design using UML (Unified Modeling Language), and analyzes aspects of its implementation. In addition, the results of testing the application are examined and further prospects for the development of associative steganography are proposed.
Keywords: associative steganography, stego messaging, stego resistance, cryptography, information security, Unified Modeling Language, .NET Framework runtime, Windows Presentation Foundation, DeflateStream, BrotliStream, MemoryStream, parallel programming