Identification of Players Ranking in E-Sport

Karol Urbaniak 1, Jarosław Wątróbski 2,* and Wojciech Sałabun 1,*

1 Research Team on Intelligent Decision Support Systems, Department of Artificial Intelligence Methods and Applied Mathematics, Faculty of Computer Science and Information Technology, West Pomeranian University of Technology in Szczecin, ul. Żołnierska 49, 71-210 Szczecin, Poland; karol-urbaniak@zut.edu.pl
2 Department of Information Systems Engineering, Faculty of Economics, Finance and Management, University of Szczecin, Mickiewicza 64, 71-101 Szczecin, Poland
* Correspondence: jaroslaw.watrobski@usz.edu.pl (J.W.); wojciech.salabun@zut.edu.pl (W.S.); Tel.: +48-91-449-5580 (W.S.)

Received: 20 August 2020; Accepted: 22 September 2020; Published: 27 September 2020

Abstract: Human activity is moving steadily to virtual reality. More and more people from all over the world share a growing fascination with e-sport. In practice, e-sport is a type of sport in which players compete using computer games. Competitions in games like FIFA, Dota 2, League of Legends, and Counter-Strike are prestigious tournaments with a global reach and budgets of millions of dollars. On the other hand, reliable player ranking is a critical issue in both classic sport and e-sport. For example, the "Golden Ball" is the most valuable prize for an individual football player in the whole history of football. Moreover, the entire world of players wants to know who the best player is. The position of each player in the ranking depends on the assessment of his skills and predispositions. In this paper, we study the identification of a player evaluation and ranking model obtained using the multiple-criteria decision-making method called the Characteristic Objects METhod (COMET), on the example of the popular game Counter-Strike: Global Offensive (CS: GO). We present a range of advantages of the player evaluation model created using the COMET method and, therefore, prove the practicality of using multi-criteria decision analysis (MCDA) methods to build multi-criteria assessment models in emerging areas of eSports. Thus, we provide a methodical and practical background for building a decision support system engine for the evaluation of players in several eSports.

Keywords: e-sport; ranking; COMET method

1. Introduction

Sport has always played an essential role in every culture in the past and still does in current times. Everybody knows conventional sports, such as football, volleyball, basketball, etc., but new sports keep appearing that are steadily expanding in popularity. One of them is Electronic Sports, also known as eSports or e-sports [1]. The history of e-sport began at the beginning of the 90s. During this decade, it became more and more popular, and the number of players increased significantly [2–5]. E-sport is a type of sport in which players compete in computer games [6,7]. The players' activities are restricted solely to the virtual environment [3]. E-sport is exciting entertainment for many fans, but it is also a source of income for professional players and whole e-sport organizations. Professional players usually belong to different e-sport organizations and represent their teams competing in omnifarious tournaments, events, and international championships [2–4,8]. The competition takes place online or through so-called local area networks (LAN).
Most encounters take place on a LAN network, where both smaller and larger numbers of computers are connected in one building, allowing for lower in-game latency between gamers [2,6,8–10].
In e-sports, the viewership is crucial. The gameplay should be designed to attract and emotionally engage the participation of as many gameplay observers as possible. E-sport is a lifestyle for computer gamers. It has become a real career path from which you can start, develop, and build your future. People still consider e-sport very conservatively. They think of it as something trivial and frivolous. While some people do not take it seriously, spectator count records, as well as prize pool records, are regularly updated during major tournaments, reaching millions watching Counter-Strike: Global Offensive (CS: GO) [11]. It is full of opportunities, awards, and travel, and it also requires great sacrifice. It is incredibly demanding to reach a world-class level [1]. In fact, it looks like a full-time job. Players usually train 8 hours a day or more. They use the computer as a tool to achieve success in a new field. To become a professional, people have to work hard without any excuses. A player is considered professional when he is hired by an organization that pays for his work representing that entity by appearing at events, mostly official tournaments on a national or international level [8]. E-sport has become an area that requires so much precision that even milliseconds determine whether to win or lose. Specialized skills, such as hand-eye coordination, muscle memory, and reaction time, as well as strategic and tactical in-game knowledge, are decisive in achieving success in that area [12]. Hand-eye coordination is the ability of the vision system to coordinate the information received through the eyes to control, guide, and direct the hands in the accomplishment of a given task, such as handwriting or catching a ball [13].

The aim of e-sports is defeating other players. It could be done by neutralizing them or, just like in sports games, by racing as fast as possible to cross the finish line before your opponents. In addition, the win may be achieved by scoring the most points [2,3]. One of the most popular genres of eSports games is the First-Person Shooter (FPS) [2,6,8,14]. The virtual environment of the game is approached from the perspective of the avatar. The only things visible of the avatar on the screen are the hands and the weapon they handle [2]. Counter-Strike is an FPS multiplayer game created and released by Valve Corporation and Hidden Path Entertainment [5,6]. There were many other versions of the game, which did not achieve much success. Valve realized how popular e-sport had become and created the new Counter-Strike game we play today, wholly tailored for competition, known as CS: GO.

The rules in CS: GO are uncomplicated. There are two teams in the game: terrorists (T) and counter-terrorists (CT). Each team aims to eliminate the opposing team or to perform a specific task. The first one's target is to plant the bomb and let it explode, while the second's is to prevent the bomb from being planted and/or exploding. Additionally, the game consists of 30 rounds, where each lasts about 2 min. After 15 rounds, players need to switch teams. The team that first wins 16 rounds is the winner. When the game does not end in 30 rounds, it goes to overtime. It consists of a best of six rounds, three on each side. The team that gets to 4 rounds wins. If there is another draw situation, the same rule applies until a winner is found [4,8].
The team's economy is concerned with the amount of money that everybody on the team has pooled cooperatively in order to buy new weapons and equipment. Winning a round by eliminating the entire enemy team provides the winners with USD 3250 per player, plus USD 300 if the bomb is planted by a terrorist. Winning by time on the counter-terrorist's side rewards players USD 3250, and winning the round with a defusal (CT) or detonation of the bomb (T) rewards USD 3500. However, if the terrorists run out of time before killing all the opponents or planting the bomb, they will not receive any cash prize. If a round is lost on the T-side, but they still manage to plant the bomb, the team will be awarded USD 800 in addition to the current round loss streak value. The money limit for each individual player in competitive matches is equal to USD 16,000 [15].

For gamers, the foundation of e-sports is the glory of winning, the ability to evoke excitement in people, and the privilege of being perceived as one of the best players in the world [2,8]. In the past, players had to bring their equipment to LAN events, while having fun in a hermetically sealed community. They could then eventually win small cash prizes or gadgets. Now, these players are winning prize pools of over USD 500 thousand, performing on big stages full of cameras and audiences [1]. The increase in popularity of e-sports was not only impressive but also forced many business people,
large corporations, and television companies to become interested in this dynamically developing market [8]. E-sport teams are often headed by traditional sports organizations and operated by traditional sports media. Tournaments are organized by conventional sports leagues, highlighting the growing connections between classical sport and e-sport [16]. In recent years, e-sport has become one of the fastest-growing forms of new media, driven by the growth of game broadcasting technologies [7,17]. E-sport and computer gaming have entered the mainstream, transforming into a convenient form of entertainment. In 2019, 453.8 million people watched e-sport worldwide, an increase of about 15% compared to 2018. This consisted of 201 million regular and 252 million occasional viewers. Between 2019 and 2023, total e-sport viewership is expected to increase by 9% per year, from 454 million in 2019 to 646 million in 2023. In six years, the number of viewers will thus have almost doubled compared to the 335 million recorded in 2017. In the current economic situation, global revenues from e-sport may reach USD 1.8 billion by 2022, or even an optimistic USD 3.2 billion. Hamari in Reference [3] claims that, with the development of e-sport, classic sport is becoming a computer-based form of media and information technology. Therefore, e-sport is a fascinating subject of research in the field of information technology.

Accurate player ranking is a crucial issue in both classic [18] and e-sport [19,20]. The result of a classification, calculated based on wins and losses in a competitive game, is often considered to be an indicator of a player's skills [20]. Each player's position in the ranking is strictly determined by their abilities, predispositions, and talent in the field of the represented discipline [16]. However, there is more than just statistics to prove a player's value and ability. Many professional players play a supporting role in their teams, for whom winning even a single round is a priority. What matters first and foremost is the team's victory, not the ambitions of individual players. The team members have to work collectively, like one organism, and everyone has to cooperate to achieve the team's success and the best possible results [21]. That is why the creation of an accurate player ranking is a problematic issue.

In this paper, we identify a model to generate a ranking of players in the popular e-sport game Counter-Strike: Global Offensive (CS: GO), using the Characteristic Objects METhod (COMET). The obtained ranking will be compared to Rating 2.0, which is the most popular rating for the CS: GO game [22,23]. This case study facilitates the application of COMET in a new field. The COMET is a novel method of identifying a multi-criteria expert decision model to solve decision problems based on a rule set, using elements of the theory of fuzzy sets [23,24]. Unlike most available multi-criteria decision analysis (MCDA) methods, COMET is completely free of the rank reversal problem. The advantages of this technique are both an intuitive dialogue with the decision-maker and the identification of a complete model of the modeling area, which is a vital element in the application of the proposed approach as the methodological and algorithmic engine in the area of computer games and, more specifically, e-sport.
The most important methodological contribution is the analysis of the significance of individual inputs and outputs, which enables the analysis of the dependence of results on individual input data. Similarly to the Analytic Hierarchy Process (AHP) method, it serves as a means of extended decision analysis, explaining what influence particular aspects had on the final result. The Spearman correlation coefficient is used to measure the input-output dependencies, which extends the COMET technique with new interpretative possibilities. This is significant because the COMET method itself does not apply any significance weights; the proposed approach makes it possible to estimate them.
The justification of the undertaken research has both theoretical and practical dimensions. MCDA methods themselves have proved to be powerful tools to solve different practical problems [25,26]. In particular, the construction of assessment models and rankings using MCDA methods is extensively discussed in the literature [27–30]. Examples of decision-making problems successfully solved with the usage of different multi-criteria methods include the assessment of environmental effects of marine transportation [31], innovation [32,33], sustainability [34,35], evaluation of renewable energy sources (RES) investments [36,37], broad environmental investment assessments [38], industrial [39] as well as personnel assessment [40], assessment of preventive health effects [41], or even evaluation of medical therapy effects [42,43]. It is also worth noticing, as additional motivation for the research, that MCDA methods have already shown their utility in building assessment models in traditional sports. For instance, an MCDA-based evaluation of soccer players was conducted in Reference [44], while the Choquet method was used to evaluate the performance of sailboats [45]. A Preference Ranking Organization Method for Enrichment Evaluation II (PROMETHEE II)-based evaluation model of football clubs was proposed in Reference [46], while an application of the AHP/SWOT model in sport marketing was presented in Reference [47]. An MCDA-based, multi-stakeholder perspective was handled in the evaluation of national-level sport organizations in Reference [48]. Both the examples provided and the state of the art presented in Reference [49] clearly show the critical role of MCDA methods in the area of building assessment models and rankings in the field of sport.

When we analyze the area of e-sport, in addition to the dominant trends, including economic research [50], sociological [3,51], psychological [52] or conversion-oriented research, and user experience (UX) [53], we observe attempts to use quantitative methods in the search for the algorithmic engines of digital products and games. For example, ex-post surveys and a statistical-based approach were used to manage the health of the eSport athlete [54]. Personal branding of eSports athletes was evaluated in Reference [55]. In Reference [56], streaming-based performance indicators were developed, and players' behavior and performance were assessed. Research focused on win/lose prediction in multi-player games was conducted in Reference [57]. A study aimed at identifying the biometric features contributing to the classification of particular skills of the players was presented in Reference [58]. So far, only one example of MCDA-based method usage in e-sport player selection and evaluation has been proposed [58]. The authors indicate the appropriateness of fuzzy MCDA in the domain of e-sport player selection and assessment. The above literature studies show a distinct research gap, including the limited application of MCDA in the e-sport domain.
In addition, the paper addresses the following essential theoretical and practical research gaps:
• extension of the COMET method by a stage of analyzing the significance of individual input data and decision-making sub-models for the final form of a ranking of decision-making options;
• transferring the methodological experience of using MCDA methods to the important and promising ground of building decision support systems in the area of eSports;
• identification of a proper domain-specific modeling domain (e-sport player evaluation), the form of which (both the family of evaluation criteria and the set of alternatives) differs significantly from that of classical sports; and
• analysis and study of the adaptation and examination of MCDA methods usage as an algorithmic, methodological engine of a decision support system (potentially providing additional functionality to a range of available digital products and games).

The rest of the paper is organized as follows: MCDA foundations and a simple comparison of MCDA techniques are presented in Section 2. Section 3 contains preliminaries of the fuzzy sets theory. The explanation of the definitions and algorithms of the multi-criteria decision-making method named COMET is given in Section 4. Section 5 introduces the results of the study, and Section 6 discusses the differences between both rankings. In Section 7, we present the conclusions and future directions.
2. MCDA Foundations

Multi-criteria decision support aims to achieve a solution that is most satisfactory to the decision-maker while meeting a sufficient number of often conflicting goals [59]. The search for such solutions requires the consideration of many alternatives and their evaluation against many criteria, as well as the transfer of the subjectivity of evaluation (e.g., the relevance of the criteria to the decision-maker) into a target model. Multi-criteria Decision Analysis (MCDA) methods are dedicated to solving this class of decision problems. During many years of research, two schools of MCDA methods have been developed.

The American MCDA school is based on the assumption that the decision-maker's preferences are expressed using two basic binary relations: when comparing the decision-making options, relations of indifference and of preference may occur. In the case of the European MCDA school, this set has been significantly extended by introducing the so-called superiority (outranking) relation. The superiority relation, apart from the two basic relations mentioned above, introduces the relation of weak preference of one of the variants over another and the relation of incomparability of the decision options. In the case of the American school methods, the result of the comparison of variants is determined for each criterion separately, and the effect of the aggregation of the grades is a single, synthesized criterion, with the order of variants being full. The methods of the American school of decision support, in the vast majority, use value or utility functions. The best-known methods of the American school are Multi-Attribute Utility Theory (MAUT), AHP, Analytic Network Process (ANP), Simple Multi-Attribute Rating Technique (SMART), UTA, Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH), and Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). In contrast to the American school (which is at the same time a criticism raised by "European school"-oriented researchers), the algorithms of the European school methods are strongly oriented towards a faithful reflection of the decision-maker's preferences (including their imprecision). The aggregation of the assessment results is done with the use of the superiority relation, and the effect of aggregation in the vast majority of methods is a partial order of variants (the effect of using the relation of incomparability). The primary methods of the European school are ELimination Et Choice Translating REality (ELECTRE) and PROMETHEE [60]. Importantly, among them only the PROMETHEE II method provides a full order of decision options as a result of the aggregation of assessments. Other methods belonging to the MCDA European school are, for example, ORESTE, REGIME, ARGUS, TACTIC, MELCHIOR, or PAMSSEM. An important additional difference between the indicated schools is the fact that the methods using synthesis to one criterion allow substitution (compensation) between criteria, whereas the methods of the European school are considered non-compensatory [61].

The third group of MCDA methods is based on decision-making rules. The formal basis of these methods is fuzzy set theory and rough set theory. Algorithms of this group of methods consist in building decision rules and their consequences; using these rules, variants are compared and evaluated, and the final ranking is generated.
Examples of MCDA rule-based methods are DRSA (Dominance-based Rough Set Approach) or the Characteristic Objects Method (COMET) [24]. The COMET uses triangular fuzzy numbers to build criteria functions. A set of characteristic objects is created using the core values of particular fuzzy numbers. So, it is a method based on fuzzy logic mechanisms. Additionally, it can also support problems with uncertain data. Table 1 shows the comparison of the COMET method with other MCDA methods. Most importantly, the COMET technique works without knowledge of the criteria weights. The decision-maker's task is to compare pairs of characteristic objects. Based on these comparisons, a model ranking is generated, and the model variants are the base for building a fuzzy rule database. When the considered alternatives are given to the decision-making system, the appropriate rules are activated, and the aggregated evaluation of the variant is determined as the sum of the products of the degrees to which the variant activates the individual rules [62].
Table 1. Comparison of the Characteristic Objects Method (COMET) with other multi-criteria decision analysis (MCDA) methods.

Method Name | Weights Usage | Weights Type | Perf. of the Variants | Uncert. Handling | Type of Uncertainty Measurement
AHP | Yes | relative | relative | No | -
COMET | No | - | quantitative | Yes | input data
ELECTRE I | Yes | quantitative | qualitative | No | -
ELECTRE IS | Yes | quantitative | quantitative | Yes | DM preferences
ELECTRE TRI | Yes | quantitative | quantitative | Yes | DM preferences
Fuzzy AHP | Yes | relative | relative | Yes | input data
Fuzzy TOPSIS | Yes | quantitative | quantitative | Yes | input data
Fuzzy VIKOR | Yes | quantitative | quantitative | Yes | input data
IDRA | Yes | quantitative | quantitative | No | -
PROMETHEE I | Yes | quantitative | quantitative | Yes | DM preferences
PROMETHEE II | Yes | quantitative | quantitative | Yes | DM preferences
TOPSIS | Yes | quantitative | quantitative | No | -
VIKOR | Yes | quantitative | quantitative | No | -

Additionally, the literature also indicates groups of so-called basic methods (e.g., the lexicographic method, the maximin method, or the additive weighting method) and mixed methods, e.g., EVAMIX [63] or QUALIFLEX, as well as the Pairwise Criterion Comparison Approach (PCCA). Examples of the latter are the methods MAPPAC, PRAGMA, PACMAN, and IDRA [64].

3. Fuzzy Set Theory: Preliminaries

Fuzzy set theory is a very valuable approach to control and modeling in several scientific fields. Modeling using fuzzy sets has proven to be an effective way of formulating multi-criteria decision problems. The necessary concepts of fuzzy set theory can be presented using the following eight definitions [13]:

Definition 1. The fuzzy set and the membership function—the characteristic function µ_A of a crisp set A ⊆ X assigns a value of either 0 or 1 to each member of X, and crisp sets only allow full membership (µ_A(x) = 1) or no membership at all (µ_A(x) = 0). This function can be generalized to a function µ_Ã so that the value assigned to the element of the universal set X falls within a specified range, i.e., µ_Ã : X → [0, 1]. The assigned value indicates the degree of membership of the element in the set Ã. The function µ_Ã is called a membership function, and the set Ã = {(x, µ_Ã(x))}, where x ∈ X, defined by µ_Ã(x) for each x ∈ X, is called a fuzzy set.

Definition 2. The triangular fuzzy number (TFN)—a fuzzy set Ã, defined on the universal set of real numbers, is a TFN Ã(a, m, b) if its membership function has the following form:

µ_Ã(x, a, m, b) = 0 for x ≤ a; (x − a)/(m − a) for a ≤ x ≤ m; 1 for x = m; (b − x)/(b − m) for m ≤ x ≤ b; 0 for x ≥ b. (1)

Definition 3. The support of a TFN Ã—this is the crisp subset of the set Ã in which all elements have non-zero membership values:

S(Ã) = {x : µ_Ã(x) > 0} = [a, b]. (2)

Definition 4. The core of a TFN Ã—this is the singleton (one-element fuzzy set) with the membership value equal to one:

C(Ã) = {x : µ_Ã(x) = 1} = m. (3)
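To make Definitions 1–4 concrete, the short sketch below (an added Python illustration, not part of the original text) evaluates the membership function, support, and core of a triangular fuzzy number; the example values 0.7/0.8/0.9 correspond to one of the fuzzy numbers used later for criterion C1.

```python
# Minimal sketch of Definitions 1-4: a triangular fuzzy number (TFN) A(a, m, b).
def tfn_membership(x, a, m, b):
    """Membership value mu_A(x) of the TFN A(a, m, b), Equation (1)."""
    if x <= a or x >= b:
        return 1.0 if x == m else 0.0   # also handles degenerate shoulders a == m or m == b
    return (x - a) / (m - a) if x <= m else (b - x) / (b - m)

a, m, b = 0.7, 0.8, 0.9      # e.g., the middle fuzzy number of criterion C1 (Figure 4)
support = (a, b)             # S(A) = {x : mu(x) > 0} = [a, b], Equation (2)
core = m                     # C(A) = {x : mu(x) = 1} = m,      Equation (3)
print(tfn_membership(0.75, a, m, b))   # 0.5
```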
Definition 5. The fuzzy rule—a single fuzzy rule can be based on the modus ponens tautology. The reasoning process uses the logical connectives IF-THEN, OR, and AND.

Definition 6. The rule base—the rule base consists of logical rules determining the causal relationships existing in the system between the fuzzy sets of its inputs and output.

Definition 7. The T-norm operator—the T-norm operator (intersection) is a function T modeling the AND intersection operation of two or more fuzzy numbers, e.g., Ã and B̃:

µ_Ã(x) AND µ_B̃(y) = µ_Ã(x) · µ_B̃(y). (4)

Definition 8. The S-norm operator—the S-norm operator (union), or T-conorm, is a function S modeling the OR union operation of two or more fuzzy numbers, e.g., Ã and B̃:

µ_Ã(x) OR µ_B̃(y) = (µ_Ã(x) + µ_B̃(y)) ∧ 1. (5)

4. The Characteristic Objects Method

COMET (the Characteristic Objects Method) is a very simple approach, most commonly used in the fields of sustainable transport [34,35,62], interactive marketing [65,66], sport [67], medicine [68], handling uncertain data in decision-making [69,70], and banking [71]. Carnero, in Reference [72], suggests using the COMET method as future work to improve her waste segregation model. The COMET is an innovative method of identifying a multi-criteria expert decision model to solve decision problems based on a rule set, using elements of the theory of fuzzy sets [24,68]. The COMET method distinguishes itself from other multiple-criteria decision-making methods by its resistance to the rank reversal paradox [73]. Contrary to other methods, the assessed alternatives are not compared with each other; the result of the assessment is obtained only on the basis of the model [24]. The whole decision-making process using the COMET method is presented in Figure 1. The formal notation of this method can be presented using the following five steps [34]:

Figure 1. The procedure of the Characteristic Objects Method (COMET) to identify the decision-making model.

Step 1. Define the space of the problem—an expert determines the dimensionality of the problem by selecting the number r of criteria, C1, C2, ..., Cr. Subsequently, the set of fuzzy numbers for each criterion Ci is selected, i.e., C̃i1, C̃i2, ..., C̃ici. In this way, the following result is obtained:

C1 = {C̃11, C̃12, ..., C̃1c1}
C2 = {C̃21, C̃22, ..., C̃2c2} (6)
.................................
Cr = {C̃r1, C̃r2, ..., C̃rcr},
where c1, c2, ..., cr are the numbers of fuzzy numbers for all criteria.

Step 2. Generate the characteristic objects—the characteristic objects (CO) are obtained by using the Cartesian product of the fuzzy numbers' cores for each criterion as follows:

CO = C(C1) × C(C2) × ... × C(Cr). (7)

As a result, the ordered set of all CO is obtained:

CO1 = {C(C̃11), C(C̃21), ..., C(C̃r1)}
CO2 = {C(C̃11), C(C̃21), ..., C(C̃r2)} (8)
...................................................
COt = {C(C̃1c1), C(C̃2c2), ..., C(C̃rcr)},

where t is the number of CO:

t = ∏_{i=1}^{r} ci. (9)

Step 3. Rank the characteristic objects—the expert determines the Matrix of Expert Judgment (MEJ). It is a result of the pairwise comparison of the characteristic objects based on the expert's knowledge. The MEJ structure is as follows:

      | α11 α12 ... α1t |
MEJ = | α21 α22 ... α2t | , (10)
      | ... ... ... ... |
      | αt1 αt2 ... αtt |

where αij is the result of comparing COi and COj by the expert. The more preferred characteristic object gets one point, and the second object gets zero points. If the preferences are balanced, both objects get half a point. It depends solely on the knowledge of the expert and can be presented as:

αij = 0.0 if f_exp(COi) < f_exp(COj); 0.5 if f_exp(COi) = f_exp(COj); 1.0 if f_exp(COi) > f_exp(COj), (11)

where f_exp is the expert mental judgment function. Afterwards, the vertical vector of the Summed Judgments (SJ) is obtained as follows:

SJi = ∑_{j=1}^{t} αij. (12)

The last step assigns to each characteristic object an approximate value of preference. As a result, the vector P is obtained, where the i-th row contains the approximate value of preference for COi.

Step 4. The rule base—each characteristic object and its value of preference is converted to a fuzzy rule in the following detailed form:

IF C(C̃1i) AND C(C̃2i) AND ... THEN Pi. (13)

In this way, the complete fuzzy rule base is obtained, which approximates the expert mental judgement function f_exp(COi).

Step 5. Inference and final ranking—each alternative is a set of crisp numbers corresponding to the criteria C1, C2, ..., Cr. It can be presented as follows:

Ai = {a1i, a2i, ..., ari}. (14)

Each alternative activates the rules of the fuzzy rule base, and its final preference is computed as the sum of the products of the activation degrees and the corresponding preference values Pi, as described in Section 2.
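The five steps above can be summarized in a compact computational form. The following Python sketch (an illustration added here, not the authors' implementation; the expert judgment function and example data are hypothetical) generates the characteristic objects, builds the MEJ and the SJ vector, derives the preference vector P, and evaluates alternatives by fuzzy rule activation with the product T-norm of Definition 7.

```python
# Generic sketch of COMET Steps 1-5 (illustrative; assumes alternatives lie within
# the range spanned by the outer cores of each criterion).
import itertools
import numpy as np

def tfn(x, a, m, b):
    """Triangular membership function, Equation (1)."""
    if x <= a or x >= b:
        return 1.0 if x == m else 0.0
    return (x - a) / (m - a) if x <= m else (b - x) / (b - m)

def comet(cores, expert, alternatives):
    """cores: sorted core values per criterion (Step 1),
    expert: stand-in for the expert mental judgment function f_exp (Step 3),
    alternatives: crisp vectors, one value per criterion (Step 5)."""
    cos = list(itertools.product(*cores))              # Step 2: characteristic objects
    t = len(cos)
    mej = np.empty((t, t))                             # Step 3: MEJ, Equation (11)
    for i in range(t):
        for j in range(t):
            fi, fj = expert(cos[i]), expert(cos[j])
            mej[i, j] = 0.5 if fi == fj else float(fi > fj)
    sj = mej.sum(axis=1)                               # Equation (12)
    levels = np.unique(sj)                             # distinct SJ values, ascending
    p = np.searchsorted(levels, sj) / (len(levels) - 1)  # preferences mapped onto [0, 1]

    def evaluate(alt):                                 # Steps 4-5: rule activation
        score = 0.0
        for co, pref in zip(cos, p):
            activation = 1.0
            for k, crit_cores in enumerate(cores):
                idx = crit_cores.index(co[k])
                lo = crit_cores[max(idx - 1, 0)]
                hi = crit_cores[min(idx + 1, len(crit_cores) - 1)]
                activation *= tfn(alt[k], lo, co[k], hi)   # product T-norm (Definition 7)
            score += activation * pref
        return score

    return [evaluate(a) for a in alternatives]

# Hypothetical usage: two criteria with three cores each and a made-up expert
# who simply prefers higher values of both criteria.
scores = comet(cores=[[0.7, 0.8, 0.9], [70.0, 80.0, 90.0]],
               expert=lambda co: co[0] / 0.9 + co[1] / 90.0,
               alternatives=[(0.85, 84.0), (0.74, 76.0)])
print(scores)
```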
5. Results

The detailed steps of the research to identify players' rankings, carried out according to the methodical framework, are presented in Figure 2. It is worth mentioning, once again, that the COMET method provides the algorithmic background and methodical approach.

Figure 2. Research procedure.

The identified model creates a ranking, which is compared with Rating 2.0 proposed by Half-Life Television (HLTV), a news website that covers professional CS: GO news, tournaments, statistics, and rankings [23]. The obtained ranking is more natural to interpret, and each player assessment has three additional parameters.

Many parameters influence the player's performance, including the evaluation of his skills and predispositions. For instance, with a player's age, the drop-off in reaction time makes it harder to compete and harder to aim at the head of a moving target. A high percentage of headshots reflects shooting skills and is a kind of prestige [74]. Therefore, the following six criteria have been selected [22,23]:
• C1—Average kills per round, the average number of kills scored by the player during one round;
• C2—Average damage per round, the mean damage inflicted by a player during one round;
• C3—Total kills, the total number of kills gained by the player;
• C4—K/D Ratio, the number of kills divided by the number of deaths;
• C5—Average assists per round, the mean number of assists gained by the player during one round; and
• C6—Average deaths per round, the average number of deaths of a player during one round.

There are plenty of other criteria which could be used to create an evaluation model, for instance, the damage per round given by grenades, the total number of rounds played by a player (which could inform us about the player's experience), or the high percentage of headshots mentioned earlier. However, the above set of six criteria has been chosen because of their greater impact on the assessment of the individual skills of each player. Especially important are the C1 and C4 criteria: they inform us that the chance of eliminating the player is smaller than the possibility that he will kill the enemies [74]. The collected data for all applied criteria and the Rating 2.0 assessment are derived from the official HLTV website and are dated June 2019. The economy of a player depends on how much he has spent on weapons and armor, the kill awards that have been received per elimination (based on weapon type), the status of bomb planting or defusing, and, finally, who won the round [15].
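The six criteria are simple ratios over a player's aggregate match statistics. The sketch below (an added illustration with hypothetical totals, not HLTV data) shows how they can be derived from raw per-player counts.

```python
# Hypothetical per-player totals (illustrative numbers only, not real HLTV statistics).
def criteria(kills, deaths, assists, damage, rounds):
    return {
        "C1_avg_kills_per_round":   kills / rounds,
        "C2_avg_damage_per_round":  damage / rounds,
        "C3_total_kills":           kills,
        "C4_kd_ratio":              kills / deaths,
        "C5_avg_assists_per_round": assists / rounds,
        "C6_avg_deaths_per_round":  deaths / rounds,
    }

print(criteria(kills=800, deaths=640, assists=110, damage=83000, rounds=1000))
# -> C1 = 0.80, C2 = 83.0, C3 = 800, C4 = 1.25, C5 = 0.11, C6 = 0.64
```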
Average kills per round (C1) is always an important criterion because, by fragging (killing an enemy), you first of all eliminate the threat posed by your opponent. For each elimination, you receive, depending on the weapon used, an amount of money needed to buy ammunition, equipment, grenades, and other utilities at a later stage of the game.
For instance, an elimination with a sniper rifle (AWP) is the least economically profitable and gives the player only USD 100, while almost any pistol gives a 300-dollar reward, and shotguns, which are the most cost-effective, give even up to USD 900 in cash prize. Additionally, killed enemies lose the weapons they acquired, thus losing all equipment, such as kevlar with a helmet or a defuse kit (CT). Criterion C1 is a profit type criterion, where a value increase means a preference increase. Based on the information about player statistics from the HLTV database for the best 40 professional players, for C1, the lowest obtained value is 0.72, the highest 0.88, and the average value is equal to 0.78.

As the Average damage per round increases, the probability of killing an enemy increases, as well. Moreover, a player is more valuable and useful for a team when he deprives the enemy team of precious health points and makes gaining frags much easier for his teammates. There was a situation during the PGL Major Kraków 2017 event when the professional player Mikhail "Dosia" Stolyarov from Gambit Esports performed an unbelievable action during the grand final against the Immortals team. His team (on the CT side) was going to lose the round because there was not enough time to defuse the bomb against three opponents. Dosia knew it was impossible to win, but he came up with an idea and threw a grenade to deal some extra damage to players who were saving their weapons. It was a few seconds before the detonation of the bomb, which takes many health points (HP) from players located in the area of the explosion. In doing so, he contributed to the death of two players, who lost precious weapons and equipment, forcing them to spend extra money in the next round. That was an example of the validity of this criterion in the professional field of CS: GO. Criterion C2 is characterized by a positive correlation with player value. For criterion C2, the lowest result is 75.60, the highest 88.20, and the mean value is equal to 82.70.

Criterion C3 determines the total number of kills scored by the player, which could signify that the player plays a lot and has a long background in Counter-Strike, like the legendary players Christopher "GeT_RiGhT" Alesund from Sweden or Filip "Neo" Kubski from Poland. When the C3 value increases, the player's evaluation also improves. As the total number of kills increases, the player's skill level and overall experience develop, as well, since he later plays against much better enemies. For criterion C3, the lowest result is 1516.00, the highest 4151.00, and the mean value is equal to 2514.90. Frankly, it is not the most critical parameter because players with far fewer frags could play as well or even better. It depends on the individual predispositions and innate potential of the gamer.

Criterion C4 is probably the most prominent measure of players' abilities in CS: GO. It is a profit type criterion, like the previous three criteria. It informs us that the chance of eliminating the player is smaller than the possibility that he will kill the enemies. If the total number of kills is greater than the overall number of deaths, the player's skill level is superior, and the gamer improves every time he plays. For professional gamers, the criterion C4 obtained the lowest result equal to 1.15, the highest 1.51, and the mean value is equal to 1.25. Even the worst K/D Ratio value in this set of players is a great result.
Obtaining assists in team games is proof of successful and productive team play. In CS: GO, an assist is evident proof that the player was close to making an elimination on the opponent: something went wrong, and in the end he only deprived the opponent of most of his health points without gaining a single frag. He thus gives his teammates the opportunity for an easy kill, but he is credited with an assist instead of a full frag on his account. Often, players who play a supporting role get a significant number of assists because they contribute to getting eliminations on the rival by, for example, blinding him with a flashbang and thus helping their colleagues. For criterion C5, the lowest result is 0.09, the highest 0.18, and the average value is equal to 0.13.

As is known, in FPS games the most important thing is to eliminate your opponents instead of being killed. By analyzing the Average number of deaths per round, we can conclude which player loses the most shooting duels and has to watch the actions of his teammates only as a spectator. It can show us the weaknesses of the player and skill shortages, allowing the best players to be distinguished. It is a cost-type criterion, which means a value increase indicates a preference decrease. For criterion C6, the lowest result is 0.52, the highest 0.68, and the mean value is equal to 0.63.
The values of the selected criteria C1–C6 and the positions and names of the alternatives are presented in Table 2. In this study case, the considered problem is simplified to the structure presented in Figure 3.

Table 2. The performance table of the alternatives and selected criteria.

Pos. | Name | C1 | C2 | C3 | C4 | C5 | C6
1 | s1mple | 0.88 | 86.6 | 1958 | 1.50 | 0.09 | 0.59
2 | ZywOo | 0.83 | 85.3 | 4151 | 1.40 | 0.12 | 0.59
3 | Jame | 0.78 | 79.3 | 3505 | 1.51 | 0.09 | 0.52
4 | Jamppi | 0.83 | 83.1 | 2851 | 1.30 | 0.10 | 0.64
5 | huNter | 0.80 | 88.2 | 4100 | 1.22 | 0.15 | 0.66
6 | vsm | 0.80 | 86.6 | 2420 | 1.22 | 0.13 | 0.65
7 | meyern | 0.82 | 83.8 | 1728 | 1.28 | 0.12 | 0.64
8 | Kaze | 0.78 | 80.7 | 1750 | 1.32 | 0.10 | 0.60
9 | Hatz | 0.76 | 81.8 | 2017 | 1.28 | 0.15 | 0.60
10 | Sico | 0.76 | 78.4 | 1876 | 1.36 | 0.13 | 0.56

Figure 3. The hierarchical structure of the players ranking assessment problem. (In the figure, criteria C1 and C2 feed the Effectiveness per round assessment model P1, C3 and C4 feed the Frag gaining assessment model P2, C5 and C6 feed the Failures per round assessment model P3, and the outputs P1–P3 feed the final CS: GO players assessment model P.)

In that way, we have to identify three related models, where each one requires a much smaller number of queries to the expert. The final decision model consists of the three following models, where, for each one, nine characteristic objects and 36 pairwise comparisons are needed:
• P1—Effectiveness per round assessment model with two inputs;
• P2—Frag gaining assessment model with two inputs;
• P3—Failures per round assessment model with two inputs.

In the Effectiveness per round assessment model (P1), we aggregate two essential criteria, Average kills per round (C1) and Average damage per round (C2), as input values. The output value is our player evaluation for model P1; the lowest result is 0.23, the highest 0.88, and the mean value is equal to 0.45 for the top 40 professional players in CS: GO. The input values of the Frag gaining assessment model (P2) are two significant criteria, Total kills (C3) and K/D Ratio (C4). The output value is our player assessment for model P2; the lowest result is 0.00, the highest 0.84, and the mean value is equal to 0.45. In the Failures per round assessment model (P3), we connect two crucial criteria, Average assists per round (C5) and Average deaths per round (C6). The output value is our player evaluation for model P3; the lowest result is 0.25, the highest 0.78, and the mean value is equal to 0.44.

The model will be validated based on the results obtained from the official HLTV website for the top 10 professional CS: GO players for June 2019, which are presented in Table 2. To identify the
final model for player assessment, we have to determine the three following assessment models, i.e., Effectiveness per round, Frag gaining, and Failures per round.

5.1. Effectiveness per Round Assessment Model

This model evaluates the efficiency in eliminating and injuring enemies, which is one of the essential elements of CS: GO. The expert identified two significant criteria for the Effectiveness per round assessment model: Average kills per round, which is the mean number of frags scored by the player during one round, and Average damage per round, which is the mean damage delivered by a player during one round. Both of them are profit type criteria, where a value increase means a preference increase. In such complex problems, the relationship is rarely linear. Table 3 presents the values of the criteria C1 and C2 and the P1 assessment model. Based on the presented data, it can be determined that the best value of the criterion C1 was achieved by 's1mple' and is equal to 0.88, while the worst result was obtained by 'dexter' with a value equal to 0.72. In the case of the second criterion, the best score was achieved by 'huNter' with 88.2, and the lowest score was received by 'xsepower' with a value equal to 75.6. Analyzing the results of the Effectiveness per round assessment model (P1), we can conclude that the highest score P1 was obtained by 's1mple' and is equal to 0.8825. The triangular fuzzy numbers of criterion C1 are presented in Figure 4, while those of C2 are presented in Figure 5.

Figure 4. Visualization of Average kills per round (C1) and triangular fuzzy numbers 0.70 (C11), 0.80 (C12), and 0.90 (C13).

Figure 5. Visualization of Average damage per round (C2) and triangular fuzzy numbers 70 (C21), 80 (C22), and 90 (C23).

In the considered set of parameters, there were players with: Average kills per round (C1) with the values of the support of the triangular fuzzy number from 0.7 (C11) to 0.9 (C13) and the core valued 0.8 (C12); Average damage per round (C2) with the values of the support of the triangular fuzzy number from 70 (C21) to 90 (C23) and the core valued 80 (C22) health points. Based on the data presented in Table 4, it turned out that the output P1 takes values from 0.1 to 0.9. Therefore, the variable P1 will take two values. Both of them will also be determined as triangular fuzzy numbers. They are displayed in Figure 6. The 36 pairwise comparisons of the 9 characteristic objects were executed.
Consequently, the Matrix of Expert Judgment (MEJ) was defined as (15), where each αij value was calculated using Equation (11).

      | 0.5  0    0    0    0    0    0    0    0   |
      | 1    0.5  0    0    0    0    0    0    0   |
      | 1    1    0.5  0    0    0    0    0    0   |
      | 1    1    1    0.5  0    0    0    0    0   |
MEJ = | 1    1    1    1    0.5  0    0    0    0   | . (15)
      | 1    1    1    1    1    0.5  0    0    0   |
      | 1    1    1    1    1    1    0.5  0    0   |
      | 1    1    1    1    1    1    1    0.5  0   |
      | 1    1    1    1    1    1    1    1    0.5 |

As a result, the vector of the Summed Judgements (SJ) was calculated using Equation (12), and it was employed to determine the values of preference (P1), which are presented in Table 3. The characteristic objects CO1–CO9 presented in Table 3 are generated using the Cartesian product of the fuzzy numbers' cores of criteria C1 and C2. The highest value of preference P1 was received by CO9, with a triangular fuzzy number of criterion C1 valued 0.9 (C13) and a triangular fuzzy number of criterion C2 valued 90 (C23). The lowest value of preference P1 fell to CO1, with a triangular fuzzy number of criterion C1 valued 0.7 (C11) and a triangular fuzzy number of criterion C2 valued 70 (C21). With an increase in the value of the criterion C1, the preference increases more significantly than with an increase in the value of the criterion C2. It means that C1 has a greater impact on the assessment of the P1 model than C2.

Figure 6. Visualization of triangular fuzzy numbers for the Effectiveness per round assessment model (P1).

Table 3. Overview of characteristic objects CO and vector P values for the Effectiveness per round assessment model.

COi | C1 | C2 | P1
CO1 | 0.7 | 70 | 0.0000
CO2 | 0.7 | 80 | 0.1250
CO3 | 0.7 | 90 | 0.2500
CO4 | 0.8 | 70 | 0.3750
CO5 | 0.8 | 80 | 0.5000
CO6 | 0.8 | 90 | 0.6250
CO7 | 0.9 | 70 | 0.7500
CO8 | 0.9 | 80 | 0.8750
CO9 | 0.9 | 90 | 1.0000
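As a small numerical check (an added illustration, not part of the original text), the preference column of Table 3 can be reproduced from MEJ (15) by summing its rows into SJ and mapping the distinct SJ values linearly onto [0, 1], assuming the standard COMET procedure for deriving P from SJ.

```python
# Reproducing the P1 column of Table 3 from the MEJ matrix in Equation (15).
import numpy as np

mej = np.tril(np.ones((9, 9)), k=-1) + 0.5 * np.eye(9)   # lower-triangular MEJ (15)
sj = mej.sum(axis=1)                                      # SJ = [0.5, 1.5, ..., 8.5]
levels = np.unique(sj)                                    # distinct SJ values, ascending
p1 = np.searchsorted(levels, sj) / (len(levels) - 1)      # map SJ ranks onto [0, 1]
print(p1)   # [0. 0.125 0.25 0.375 0.5 0.625 0.75 0.875 1.], as in Table 3
```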
Table 4. The performance table of the selected criteria C1, C2 and assessment model P1.

Pos. | Name | C1 | C2 | P1
1 | s1mple | 0.88 | 86.6 | 0.8825
2 | ZywOo | 0.83 | 85.3 | 0.6788
3 | Jame | 0.78 | 79.3 | 0.4163
4 | Jamppi | 0.83 | 83.1 | 0.6513
5 | huNter | 0.80 | 88.2 | 0.6025
6 | vsm | 0.80 | 86.6 | 0.5825
7 | meyern | 0.82 | 83.8 | 0.6225
8 | Kaze | 0.78 | 80.7 | 0.4338
9 | Hatz | 0.76 | 81.8 | 0.3725
10 | Sico | 0.76 | 78.4 | 0.3300

For a better demonstration of the relevance of the criteria to the P1 assessment model, Spearman's rank correlation coefficient ρ was calculated. The ρ coefficient between the criteria C1 and C2 and the reference ranking obtained by the P1 assessment model for the top 10 players is equal to 0.9273 and 0.2970, respectively. The correlation for the first criterion is strong, while for the second it is weak. The relation diagram of Average kills per round (C1) and the P1 assessment model, as well as the relation diagram of Average damage per round (C2) and the P1 assessment model, is presented in Figure 7.

Figure 7. The relation diagram of Average kills per round (C1) for assessment P1 (left side) and Average damage per round (C2) for assessment P1 (right side).

5.2. Frag Gaining Assessment Model

The model verifies the probability that a player gets an elimination, based on the number of kills he has obtained in official CS: GO matches and a specific factor showing that the player is superior. The expert identified two significant criteria for the Frag gaining assessment model: Total kills, which is the total number of frags delivered by the player, and K/D Ratio, which is the number of frags divided by the number of deaths. Both of them are profit type criteria; as mentioned earlier, as the values increase, the preference increases, too. Table 5 shows the values of the criteria C3 and C4 and the P2 assessment model. Based on the presented data, it can be determined that the best value of the criterion C3 was achieved by 'ZywOo' and is equal to 1.000, while the worst result was obtained by 'BnTeT' with a value equal to 0. In the case of the second criterion, the best score was achieved by 'Jame' with 1.51, and the lowest score was received by 'Texta' with a value equal to 1.15. Analyzing the results of the Frag gaining assessment model (P2), we can conclude that the highest score was obtained by 'Jame' and is equal to 0.8423. The triangular fuzzy numbers of criterion C3 are presented in Figure 8 and those of C4 in Figure 9.
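Note that the C3 values used in this sub-model are expressed on a [0, 1] scale rather than as raw kill counts. The paper does not state the scaling explicitly, but the values reported in Table 6 are consistent with simple min-max scaling over the top-40 range of 1516–4151 total kills given earlier; the sketch below illustrates that assumption.

```python
# Assumed min-max scaling of Total kills (C3) onto [0, 1] over the top-40 range.
raw_kills = {"s1mple": 1958, "ZywOo": 4151, "Jame": 3505, "Jamppi": 2851, "huNter": 4100}
lo, hi = 1516.0, 4151.0        # lowest and highest C3 among the top 40 players (Section 5)
scaled = {name: round((k - lo) / (hi - lo), 3) for name, k in raw_kills.items()}
print(scaled)
# {'s1mple': 0.168, 'ZywOo': 1.0, 'Jame': 0.755, 'Jamppi': 0.507, 'huNter': 0.981},
# which matches the C3 column of Table 6.
```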
Figure 8. Visualization of Total kills (C3) and triangular fuzzy numbers 0.0 (C31), 0.5 (C32), and 1.0 (C33).

Figure 9. Visualization of the number of kills divided by the number of deaths (K/D Ratio, C4) and triangular fuzzy numbers 1.00 (C41), 1.25 (C42), and 1.60 (C43).

In the considered set of parameters, there were players with: Total kills (C3) with the values of the support of the triangular fuzzy number from 0 (C31) to 1 (C33) and the core valued 0.5 (C32); K/D Ratio (C4) with the values of the support of the triangular fuzzy number from 1 (C41) to 1.6 (C43) and the core valued 1.25 (C42). Based on the data presented in Table 6, it turned out that the output P2 takes values from 0.2 to 0.9. Therefore, the variable P2 will take two values. Both of them will also be saved as triangular fuzzy numbers. They are displayed in Figure 10. The 36 pairwise comparisons of the 9 characteristic objects were executed. Consequently, the Matrix of Expert Judgment (MEJ) was defined as (16), where each αij value was calculated using Equation (11).

      | 0.5  0    0    0    0    0    0    0    0   |
      | 1    0.5  0    1    0    0    1    0    0   |
      | 1    1    0.5  1    1    0    1    1    0   |
      | 1    0    0    0.5  0    0    0    0    0   |
MEJ = | 1    1    0    1    0.5  0    1    0    0   | . (16)
      | 1    1    1    1    1    0.5  1    1    0   |
      | 1    0    0    1    0    0    0.5  0    0   |
      | 1    1    0    1    1    0    1    0.5  0   |
      | 1    1    1    1    1    1    1    1    0.5 |

As a result, the vector of the Summed Judgements (SJ) was calculated using Equation (12), and it was used to determine the values of preference (P2), which are presented in Table 5. The characteristic objects CO1–CO9 presented in Table 5 are generated using the Cartesian product of the fuzzy numbers' cores of criteria C3 and C4. The highest value of preference P2 was received by CO9, with a triangular fuzzy number of criterion C3 valued 1 (C33) and a triangular fuzzy number of criterion C4 valued 1.6 (C43). The lowest value of preference P2 fell to CO1, with a triangular fuzzy number of criterion C3 valued 0 (C31) and a triangular fuzzy number of criterion C4 valued 1 (C41). With an increase in the value of the criterion C4, the preference increases more significantly than with an increase in the value of the criterion C3. It means that C4 has a greater impact on the assessment of the P2 model than C3.
Figure 10. Visualization of triangular fuzzy numbers for the Frag gaining assessment model (P2).

Table 5. Overview of characteristic objects CO and vector P values for the Frag gaining assessment model.

COi | C3 | C4 | P2
CO1 | 0.0 | 1.00 | 0.0000
CO2 | 0.0 | 1.25 | 0.3750
CO3 | 0.0 | 1.60 | 0.7500
CO4 | 0.5 | 1.00 | 0.1250
CO5 | 0.5 | 1.25 | 0.5000
CO6 | 0.5 | 1.60 | 0.8750
CO7 | 1.0 | 1.00 | 0.2500
CO8 | 1.0 | 1.25 | 0.6250
CO9 | 1.0 | 1.60 | 1.0000

Table 6. The performance table of the selected criteria C3, C4 and assessment model P2.

Pos. | Name | C3 | C4 | P2
1 | s1mple | 0.168 | 1.50 | 0.6849
2 | ZywOo | 1.000 | 1.40 | 0.7857
3 | Jame | 0.755 | 1.51 | 0.8423
4 | Jamppi | 0.507 | 1.30 | 0.5553
5 | huNter | 0.981 | 1.22 | 0.5753
6 | vsm | 0.343 | 1.22 | 0.4158
7 | meyern | 0.080 | 1.28 | 0.4271
8 | Kaze | 0.089 | 1.32 | 0.4723
9 | Hatz | 0.190 | 1.28 | 0.4546
10 | Sico | 0.137 | 1.36 | 0.5271

For a better demonstration of the relevance of the criteria to the P2 assessment model, Spearman's rank correlation coefficient ρ was calculated. The ρ coefficient between the criteria C3 and C4 and the reference ranking obtained by the P2 assessment model is equal to 0.5636 and 0.4910, respectively. The correlation between the first criterion and the reference ranking is moderately strong, while for the second it is weak. The relation diagram of Total kills (C3) and the P2 assessment model, as well as the relation diagram of K/D Ratio (C4) and the P2 assessment model, is shown in Figure 11.
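To make the inference step (Step 5) concrete, the following sketch (an added illustration, not code from the paper) evaluates the P2 sub-model for a single player using the rule base of Table 5, the triangular fuzzy numbers of Figures 8 and 9, the product T-norm, and a weighted sum of rule activations; with s1mple's inputs from Table 6 it returns approximately 0.6849, the value reported there.

```python
# Inference for the P2 sub-model: rule base from Table 5, TFN cores 0/0.5/1.0 (C3)
# and 1.0/1.25/1.6 (C4); product T-norm and weighted-sum aggregation.
def tfn(x, a, m, b):
    if x <= a or x >= b:
        return 1.0 if x == m else 0.0
    return (x - a) / (m - a) if x <= m else (b - x) / (b - m)

c3_cores, c4_cores = [0.0, 0.5, 1.0], [1.0, 1.25, 1.6]
p2_rules = [0.0, 0.375, 0.75, 0.125, 0.5, 0.875, 0.25, 0.625, 1.0]   # P2 for CO1..CO9

def membership(x, cores, i):
    lo = cores[max(i - 1, 0)]
    hi = cores[min(i + 1, len(cores) - 1)]
    return tfn(x, lo, cores[i], hi)

def p2(c3, c4):
    score = 0.0
    for idx, pref in enumerate(p2_rules):
        i, j = divmod(idx, 3)            # CO ordering in Table 5: C3 core index, then C4
        score += membership(c3, c3_cores, i) * membership(c4, c4_cores, j) * pref
    return score

print(round(p2(0.168, 1.50), 4))          # ~0.6849, s1mple's P2 in Table 6
```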
Figure 11. The relation diagram of Total kills (C3) for assessment P2 (left side) and K/D Ratio (C4) for assessment P2 (right side).

5.3. Failures per Round Assessment Model

This model evaluates the weaker side of the player by showing how often he has a decline in form and skill deficiencies, which are vital to maintaining himself at the top of the global e-sport scene. The expert identified two crucial criteria for the Failures per round assessment model: Average assists per round, which is the average number of assists scored by the player during one round, and Average deaths per round, which is the average number of deaths of a player during one round. The first one is a profit type criterion, which means that a value increase indicates a preference increase; however, the second one is a cost-type criterion, which means a value increase indicates a preference decrease. Table 7 shows the values of the criteria C5 and C6 and the P3 assessment model. Based on the presented data, it can be determined that the best value of the criterion C5 was achieved by 'INS' and is equal to 0.18, while the worst result was obtained by 'kNgV-' with a value equal to 0.09. In the case of the second criterion, the best score was achieved by 'Jame' with 0.52, and the worst score was received by 'roeJ' with a value equal to 0.68. Analyzing the results of the Failures per round assessment model (P3), we can conclude that the highest score was obtained by 'Jame' and is equal to 0.7750. The triangular fuzzy numbers of criterion C5 are presented in Figure 12 and those of C6 in Figure 13.

Figure 12. Visualization of Average assists per round (C5) and triangular fuzzy numbers 0.05 (C51), 0.10 (C52), and 0.20 (C53).
Figure 13. Visualization of Average deaths per round (C6) and triangular fuzzy numbers 0.5 (C61), 0.6 (C62), and 0.7 (C63).

In the considered set of parameters, there were players with: Average assists per round (C5) with the values of the support of the triangular fuzzy number from 0.05 (C51) to 0.2 (C53) and the core valued 0.1 (C52); Average deaths per round (C6) with the values of the support of the triangular fuzzy number from 0.5 (C61) to 0.7 (C63) and the core valued 0.6 (C62). Based on the data presented in Table 8, it turned out that the output P3 takes values from 0.2 to 0.8. Therefore, the variable P3 will take two values. Both of them will also be saved as triangular fuzzy numbers. They are displayed in Figure 14. The 36 pairwise comparisons of the 9 characteristic objects were executed. Consequently, the Matrix of Expert Judgment (MEJ) was defined as (17), where each αij value was calculated using Equation (11).

      | 0.5  0    0    1    1    1    1    1    1   |
      | 1    0.5  0    1    1    1    1    1    1   |
      | 1    1    0.5  1    1    1    1    1    1   |
      | 0    0    0    0.5  0    0    1    1    1   |
MEJ = | 0    0    0    1    0.5  0    1    1    1   | . (17)
      | 0    0    0    1    1    0.5  1    1    1   |
      | 0    0    0    0    0    0    0.5  0    0   |
      | 0    0    0    0    0    0    1    0.5  0   |
      | 0    0    0    0    0    0    1    1    0.5 |

As a result, the vector of the Summed Judgements (SJ) was calculated using Equation (12), and it was employed to determine the values of preference (P3), which are presented in Table 7. The characteristic objects CO1–CO9 presented in Table 7 are generated using the Cartesian product of the fuzzy numbers' cores of criteria C5 and C6. The highest value of preference P3 was received by CO3, with a triangular fuzzy number of criterion C5 valued 0.2 (C53) and a triangular fuzzy number of criterion C6 valued 0.5 (C61). The lowest value of preference P3 fell to CO7, with a triangular fuzzy number of criterion C5 valued 0.05 (C51) and a triangular fuzzy number of criterion C6 valued 0.7 (C63). With a decrease in the value of the criterion C6, the preference increases more significantly than with an increase in the value of the criterion C5. It means that C6 has a greater impact on the assessment of the P3 model than C5.
Figure 14. Visualization of triangular fuzzy numbers for the Failures per round assessment model (P3).

Table 7. Overview of characteristic objects CO and vector P values for the Failures per round assessment model.

COi | C5 | C6 | P3
CO1 | 0.05 | 0.5 | 0.7500
CO2 | 0.10 | 0.5 | 0.8750
CO3 | 0.20 | 0.5 | 1.0000
CO4 | 0.05 | 0.6 | 0.3750
CO5 | 0.10 | 0.6 | 0.5000
CO6 | 0.20 | 0.6 | 0.6250
CO7 | 0.05 | 0.7 | 0.0000
CO8 | 0.10 | 0.7 | 0.1250
CO9 | 0.20 | 0.7 | 0.2500

Table 8. The performance table of the selected criteria C5, C6 and assessment model P3.

Pos. | Name | C5 | C6 | P3
1 | s1mple | 0.09 | 0.59 | 0.5125
2 | ZywOo | 0.12 | 0.59 | 0.5625
3 | Jame | 0.09 | 0.52 | 0.7750
4 | Jamppi | 0.10 | 0.64 | 0.3500
5 | huNter | 0.15 | 0.66 | 0.3375
6 | vsm | 0.13 | 0.65 | 0.3500
7 | meyern | 0.12 | 0.64 | 0.3750
8 | Kaze | 0.10 | 0.60 | 0.5000
9 | Hatz | 0.15 | 0.60 | 0.5625
10 | Sico | 0.13 | 0.56 | 0.6875

For a better demonstration of the relevance of the criteria to the P3 assessment model, Spearman's rank correlation coefficient ρ was calculated. The ρ coefficient between the criteria C5 and C6 and the reference ranking obtained by the P3 assessment model is equal to 0.5273 and 0.1636, respectively. The correlation between the first criterion and the reference ranking is moderately strong, while for the second it is weak. The relation diagram of Average assists per round (C5) and the P3 assessment model, as well as the relation diagram of Average deaths per round (C6) and the P3 assessment model, is shown in Figure 15.
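The significance analysis used throughout this section rests on Spearman's rank correlation. A minimal sketch of such a computation is given below (an added illustration; the exact coefficients reported in the text depend on the player set and on how tied ranks and cost-type criteria are handled, so the output of this snippet is not claimed to match them).

```python
# Spearman's rank correlation between a criterion column and the sub-model scores
# (example data taken from Table 8; scipy's spearmanr averages tied ranks).
from scipy.stats import spearmanr

c5 = [0.09, 0.12, 0.09, 0.10, 0.15, 0.13, 0.12, 0.10, 0.15, 0.13]
p3 = [0.5125, 0.5625, 0.7750, 0.3500, 0.3375, 0.3500, 0.3750, 0.5000, 0.5625, 0.6875]
rho, p_value = spearmanr(c5, p3)
print(round(rho, 4))
```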
Figure 15. The relation diagram of Average assists per round (C5) for assessment P3 (left side) and Average deaths per round (C6) for assessment P3 (right side).

5.4. Final Model

The CS: GO players assessment model finally determines the uniqueness of the Counter-Strike: Global Offensive player by placing him in the final ranking, based on the previous partial assessments. The final model for the players' assessment has three aggregated input variables: the output variable from the Effectiveness per round assessment, the output variable from the Frag gaining assessment, and the output variable from the Failures per round assessment. The aggregated variables P1 and P2 are both profit type, whereas P3 is cost type. The triangular fuzzy numbers of parameter P1 are presented in Figure 16, those of P2 in Figure 17, and those of P3 in Figure 18.

Figure 16. Visualization of the Effectiveness per round assessment model (P1) and triangular fuzzy numbers 0.1 (P11) and 0.9 (P12).

Figure 17. Visualization of the Frag gaining assessment model (P2) and triangular fuzzy numbers 0.2 (P21) and 0.9 (P22).