I Introduction
Many-objective optimization problems (MaOPs) concern the simultaneous optimization of M conflicting objectives, where M is greater than three [1]. Generally, an MaOP has the following formulation, described by Equation (1):
minimize F(x) = (f_1(x), f_2(x), ..., f_M(x)), subject to x ∈ Ω    (1)
where Ω denotes the decision space and R^M the objective space. Without loss of generality, it is assumed that Equation (1) is a minimization problem in which f_1, ..., f_M are all to be minimized. Because MaOPs widely exist in many real-world applications, such as management in land exploitation [2] and calibration problems of automotive engines [3], to name a few, there is a strong incentive for efficiently and effectively solving MaOPs.
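The Pareto dominance relation that underpins the discussion below can be sketched in a few lines. The 10-objective random population is a purely hypothetical illustration (not from the paper) of how rarely random solutions are mutually comparable when the number of objectives is large, which is the root of the dominance-resistance issue discussed next:

```python
import numpy as np

def dominates(a, b):
    """Pareto dominance (minimization): a dominates b iff a is no worse in
    every objective and strictly better in at least one."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return bool(np.all(a <= b) and np.any(a < b))

# With many objectives, randomly generated solutions are rarely comparable:
rng = np.random.default_rng(0)
pop = rng.random((100, 10))              # 100 solutions, M = 10 objectives
comparable = sum(dominates(p, q) or dominates(q, p)
                 for i, p in enumerate(pop) for q in pop[i + 1:])
frac = comparable / (100 * 99 / 2)       # fraction of comparable pairs
```

For M = 10, `frac` is close to zero, i.e., nearly the whole population is mutually non-dominated.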
In MaOPs, there is no single perfect solution that optimizes all of the objectives at the same time, but rather a set of Pareto-optimal solutions in which each individual is non-dominated with respect to the others. All the Pareto-optimal solutions constitute the Pareto-optimal set (PS) in the decision space, while the image of the PS forms the Pareto-optimal front (PF) in the objective space. Commonly, the goal in solving MaOPs is to obtain a limited number of Pareto-optimal solutions uniformly distributed on the PF, from which a decision-maker can select a solution based on his or her preference. Among all the approaches for handling MaOPs, evolutionary algorithms are considered preferable because of the search power of these population-based metaheuristics. During the past several decades, numerous multi-objective evolutionary algorithms (MOEAs), such as the elitist non-dominated sorting genetic algorithm (NSGA-II) [4] and the improved strength Pareto evolutionary algorithm (SPEA2) [5], have been developed for dealing with multi-objective optimization problems (MOPs), in which at most three objectives are optimized simultaneously. However, their performance degrades drastically on MaOPs [6]. The main reason is the loss of selection pressure, caused by the dominance resistance (DR) [7] and the curse of dimensionality [8] phenomena. To be specific, DR refers to a large proportion of solutions that are the best in one or very few objectives but far worse in the others; such solutions cannot be discriminated by the original Pareto dominance principle. A density-based secondary measurement is then activated to decide which solutions survive into the next generation [9]. Because of the behavior influenced by DR pointed out in [10], the selected solutions do not necessarily converge to the PF [8]. To this end, various many-objective evolutionary algorithms (MaOEAs) have been developed for tackling MaOPs (algorithms originally designed for MOPs but extended to MaOPs are also categorized as MaOEAs in this paper), such as the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [11], the hypervolume-based many-objective optimization algorithm (HypE) [12], the grid-based evolutionary algorithm for many-objective optimization (GrEA) [13], the many-objective optimization algorithm using reference-point-based non-dominated sorting (NSGA-III) [14], etc. [9, 15, 16, 17] (typically, these MaOEAs can be classified into three basic categories: dominance-based, decomposition-based, and hypervolume-based; MOEA/D and HypE are from the second and third categories, respectively, while NSGA-III and GrEA hybridize the first and second). More precisely, MOEA/D employs the decomposition-based approach to construct a set of single-objective problems by aggregating the objectives of the original MaOP with different predefined weight vectors; new solutions are generated within a subregion, and diversity is maintained by the uniformly distributed weight vectors. In HypE, promising solutions are selected based on fitness assigned by their contribution to the hypervolume measure; as computing the exact hypervolume is prohibitive, Monte Carlo simulation is employed to address this limitation. GrEA utilizes a grid-based approach, introducing grid-based fitness comparison to relax the Pareto dominance relationship and grid metrics to improve diversity. Compared to NSGA-II, the improvement of NSGA-III lies in the diversity mechanism, which assigns solutions to a set of uniformly distributed reference vectors. In summary, the state-of-the-art MaOEAs mentioned above mainly address two distinct issues: 1) reforming the comparison manner of the traditional dominance relationship, as in HypE and GrEA, and 2) applying new designs to reinforce diversity, as in MOEA/D and NSGA-III.
It is highly expected that individuals generated by selected parents with the crossover and mutation operators would march towards the PF in an MOP. However, this is not the case in MaOPs due to the DR phenomenon. Specifically, if the parents are neighbors of DR solutions, their offspring are also DR solutions; otherwise, the newly generated solutions are not necessarily better than their parents, which are scattered over a large many-objective space at remote distances from each other. This can be seen as the inefficiency of existing genetic operators for MaOPs [8]. Moreover, Deb et al. [18] concluded that the performance of MOEAs is significantly influenced by the genetic operators, which cannot be guaranteed to generate promising offspring.
Furthermore, the parameters in genetic operators need to be empirically configured; for example, the distribution index of SBX in NSGA-III needs to be set to a large value. For this purpose, researchers have developed estimation of distribution algorithms (EDAs) to tackle optimization problems [19, 20, 21], generating new solutions not with the traditional genetic operators but with probabilistic models built from the statistics of the visited solutions. A general framework of EDAs is illustrated in Algorithm 1. Typically, EDA-based MOEAs are broadly classified into two categories based on their estimation models.
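The general EDA loop of Algorithm 1 can be sketched as follows; this is a minimal single-objective Gaussian sketch (all parameter names and values are illustrative assumptions, not the paper's algorithm): fit a probabilistic model to the selected elites, sample offspring from it, and repeat.

```python
import numpy as np

def eda(fitness, dim, pop_size=50, elite_frac=0.5, generations=60, seed=1):
    """Minimal EDA sketch: repeatedly fit a per-dimension Gaussian model to
    the selected elites and sample the next population from it."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, (pop_size, dim))        # initial population
    for _ in range(generations):
        scores = np.apply_along_axis(fitness, 1, pop)    # evaluate fitness
        elite = pop[np.argsort(scores)[: int(elite_frac * pop_size)]]
        mu = elite.mean(axis=0)                          # model building
        sigma = elite.std(axis=0) + 1e-6                 # avoid collapse
        pop = rng.normal(mu, sigma, (pop_size, dim))     # sample new solutions
    scores = np.apply_along_axis(fitness, 1, pop)
    return pop[np.argmin(scores)]

best = eda(lambda x: float(np.sum(x ** 2)), dim=3)       # sphere function
```

No genetic crossover or mutation appears anywhere: new solutions come exclusively from the estimated distribution, which is the defining trait of EDAs.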
The first category covers the Bayesian-network-based EDAs. For example, the multi-objective Bayesian optimization algorithm [22] utilized the Bayesian optimization algorithm (BOA) to build a Bayesian network as its model for generating offspring. A related work [23] predicted the model by combining a strengthened Pareto ranking approach [24] with BOA. Furthermore, Laumanns et al. [25] proposed a Bayesian multi-objective optimization algorithm whose model is built over the solutions selected by a Pareto ranking method [26]. In addition, an improved non-dominated sorting approach was employed by the decision-tree-based multi-objective EDA [27] to select a subset of solutions for a regression decision tree to learn the model. Recently, the multi-dimensional Bayesian network EDA (MBN-EDA) was proposed in [21] specifically for addressing MaOPs.
The other category is often known as the mixture-probability-model-based EDAs. Examples include the multi-objective mixture-based iterated density estimation evolutionary algorithm
[28], employing mixed probability distributions to sample well-distributed solutions, and the multi-objective Parzen-based EDA [29], learning from Gaussian and Cauchy kernels to build its models. In [30], the multi-objective hierarchical Bayesian optimization algorithm was designed with a mixture Bayesian-network-based probabilistic model for discrete multi-objective optimization problems. In addition, the multi-objective extended compact genetic algorithm [31] took a marginal product model as its mixture probability model. Furthermore, a regularity-model-based EDA (RM-MEDA) was proposed in [20], in which the model is built from a mixture of normal distributions over the regularity. Zhou et al. [32] proposed a regularity-based method for solving MOPs whose objective spaces are low-dimensional.
It is believed that EDAs are capable of solving MaOPs without suffering the disadvantages of MOEAs with traditional genetic operators. Although MBN-EDA has shown promise in solving MaOPs, the development of many-objective EDAs (MaOEDAs) is still in its infancy. In particular, probability models based on regularity have been extensively investigated in the discipline of statistical learning [33, 34], and regularity-based models are easier to build and fairly effective. Based on our recent research achievements on this topic [35, 36] and motivated by the success of regularity-based EDAs for MOPs [37, 38, 20], an improved regularity-based EDA for MaOPs, named MaOEDA-IR for short, is proposed in this paper. To be specific, the models employed to generate new solutions in the proposed algorithm are built on groups of neighbors selected by a set of uniformly distributed reference vectors. In order to improve the selection pressure, a diversity-repairing mechanism is developed to prevent the adverse DR phenomenon in each generation and push the new solutions toward a closer proximity to the PF. Furthermore, a dimension reduction technique is employed prior to the evolution to reduce the cost of the exploration search. Specifically, convergence in the proposed algorithm is promoted by repairing diversity and sampling solutions based on the reference vectors, while diversity is facilitated by selecting solutions with the nearest perpendicular distances to the reference vectors. Compared to traditional MaOEAs and EDAs, the contributions of the proposed algorithm are summarized as follows:

Extend the use of regularity-model-based EDAs to MaOPs. In addition, a reference-vector-based diversity mechanism is incorporated into the proposed algorithm to enhance the selection pressure.

The large search space poses a challenge for regularity-model-based EDAs, as it does for all MaOEAs. To this end, a dimension reduction technique is utilized in the decision space to speed up the exploration search for sampling promising solutions.

Convergence and diversity are considered equally important in the design of a quality MOEA or MaOEA. In the proposed algorithm, convergence is mainly treated in the first stage (the dimension reduction phase), while diversity is the focus of the second stage.
The remainder of this paper is organized as follows. The related reference-vector-based MaOEA, evolutionary algorithms based on dimension reduction, and the seminal work on regularity-based EDAs are reviewed in Section II. In Section III, the framework of the proposed algorithm is outlined and its respective steps are detailed; in addition, the complexity of the proposed algorithm is analyzed, and two crucial subcomponents of the proposed algorithm, as well as the principles for selecting neighbor solutions for building the model, are discussed. In Section IV, a series of experiments is performed over widely used test suites against chosen peer competitors, and the results, measured by the selected performance metrics, are statistically compared, in addition to experiments investigating the effect of the neighbor size, the diversity repairing, and the dimension reduction. Finally, conclusions and future work are given in Section V.
II Related Works
Because the proposed algorithm is mainly concerned with reference vectors, dimension reduction, and the regularity-based EDA model, works related to these aspects are discussed. To be specific, the state-of-the-art MaOEA NSGA-III, which is based on reference vectors, is introduced first. Then the evolutionary algorithms employing dimension reduction are reviewed. Next, RM-MEDA, a regularity-based EDA, is analyzed. Finally, the disadvantages of RM-MEDA in solving MaOPs, and its differences from the proposed algorithm, are highlighted.
II-A NSGA-III
The major difference of NSGA-III compared to its predecessor is the diversity-improving mechanism, which is performed via reference vectors and takes effect when solutions need to be chosen from the last accepted non-dominated front. To be specific, each reference vector is assigned the already selected solutions having the nearest perpendicular distance to it. Then the reference vector with the smallest number of assigned solutions is marked, and the solution in the last front that has the smallest perpendicular distance to this vector is selected. Next, the selected solution is removed from the front, and the assignment count of that reference vector is increased by one. These steps are iterated until the required number of solutions has been selected from the last front. Because the reference vectors are uniformly distributed, the selected solutions are hopefully evenly distributed in the objective space. Specifically, the reference vectors are uniformly generated on the unit hyperplane, so the elements of each reference vector sum to one. However, the PF of the problem to be optimized does not necessarily lie in this unit hyperplane. For this purpose, all the objectives are normalized prior to calculating the perpendicular distances. This mechanism of uniformly distributed reference vectors assisting the selection of solutions is more and more preferred by MaOEAs due to its explicitly diversity-preserving nature.
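The core geometric operation above, associating each (normalized) objective vector with its nearest reference vector by perpendicular distance, can be sketched as follows; the toy vectors are illustrative assumptions:

```python
import numpy as np

def perpendicular_distance(f, w):
    """Distance from objective vector f to the line through the origin
    along reference vector w."""
    w = w / np.linalg.norm(w)
    proj = np.dot(f, w) * w                 # projection of f onto the line
    return float(np.linalg.norm(f - proj))  # length of the residual

def associate(F, W):
    """Assign each solution (row of F) to its nearest reference vector
    (row of W); returns the index and the distance for each solution."""
    idx, dist = [], []
    for f in F:
        d = [perpendicular_distance(f, w) for w in W]
        idx.append(int(np.argmin(d)))
        dist.append(float(np.min(d)))
    return np.array(idx), np.array(dist)

W = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])   # toy reference vectors
F = np.array([[0.9, 0.1], [0.4, 0.6], [0.1, 0.8]])   # normalized objectives
niche, d = associate(F, W)
```

NSGA-III's niching then counts how many solutions each reference vector received and fills under-represented vectors first, which is what spreads the population across the front.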
II-B Evolutionary Algorithms Based on Dimension Reduction
The notion that state-of-the-art MOEAs are capable of solving MOPs naturally leads to the intent of reducing the number of objectives in an MaOP (i.e., dimension reduction in the objective space) and then applying these powerful MOEAs to solve it. Specifically, dimension reduction in the objective space refers to removing redundant objectives such that the same solutions are obtained as when all objectives are involved [39]. With the utilization of dimension reduction, the computational complexity is reduced due to the smaller number of objectives. In summary, the dimension reduction schemes considered in the literature can be sorted into two categories. The first category is often known as the correlation-based methods, such as the works in [40, 41, 42]. In addition, the correntropy principal component analysis (C-PCA), maximum-variance-unfolding PCA (MVU-PCA) [43], and a PCA-based algorithm [44] were proposed to analyze the correlation between the solutions generated in each generation, while Saxena et al. [45] applied linear PCA and nonlinear MVU-based methods over a set of Pareto-optimal solutions to check the correlation. In addition, Guo et al. [46] employed interdependence factors to identify correlations for dimension reduction. Recently, Wang and Yao proposed a novel approach to reduce the objective dimension by measuring the linear and nonlinear correlation between objectives using the nonlinear correlation information entropy [47]. The second category covers algorithms that perform dimension reduction based on the dominance structure, such as the work in [41], in which dominance was employed to identify redundant objectives. In addition, the Pareto corner search evolutionary algorithm [48] utilized a corner-sorting technique to find the corner solutions on which the dimension reduction was performed. In summary, the dimension reduction techniques in these algorithms operate in the objective space, while the proposed improved regularity-based MaOEDA builds its model in the decision space. Moreover, it is common that the number of decision variables is much greater than that of the objectives. As a consequence, it is reasonable to reduce the dimension of the decision space for the purpose of computational efficiency and effectiveness.
II-C RM-MEDA
The probability model of RM-MEDA is built based on the regularity of the decision space. To be specific, all the solutions are first clustered into multiple disjoint groups by local PCA [49], and then a model is constructed to generate new solutions for each group. Specifically, local PCA is employed for manifold dimension reduction by performing multiple PCA operations on each piecewise linear segment over the given data. Compared to PCA, local PCA is considered better at collectively capturing the global structure. For an intuitive comparison of the effects of local PCA and PCA, an example is plotted in Fig. 1, which clearly shows that local PCA works better in estimating the entire structure of the data. Suppose the eigenvalues of the covariance matrix of one group are sorted in descending order together with their corresponding eigenvectors. Then the model is formulated under the assumption that the PS of a continuous problem is a piecewise continuous manifold of dimension one less than the number of objectives. Furthermore, solutions are sampled from each sub-model in numbers proportional to the volume of that sub-model relative to the total volume of all sub-models. Combined with non-dominated sorting, a set of diverse solutions approximating the PF is generated.
EDAs generate new solutions with a probabilistic model to avoid the detrimental consequences of genetic operators. In particular, regularity-based models are often preferable to most Bayesian-network-based models in EDAs because of their simplicity yet effectiveness [50]. For example, EDAs based on Bayesian networks need a training procedure, while regularity-based methods are analytic. However, because RM-MEDA was originally developed for solving MOPs with variable linkage [20], it may not be suitable for solving MaOPs. For example, diversity in RM-MEDA is maintained by uniformly sampling new solutions in the decision space, which does not give rise to corresponding diversity in the objective space, especially in MaOPs such as the test problems in DTLZ [51]. Furthermore, local PCA makes sense when the PS is of full rank in the decision space, which is not necessarily the case in MaOPs. To this end, an improved regularity-based EDA for effectively addressing MaOPs is proposed in this paper. In addition, a new diversity-facilitating mechanism employing uniformly distributed reference vectors is incorporated to improve the selection pressure. Furthermore, a correlation-based dimension reduction technique is applied in the decision space over a set of Pareto-optimal solutions to save computational cost. Compared to RM-MEDA, the main contributions of the proposed algorithm are listed as follows.

The diversity-improving mechanism of the proposed algorithm is compatible with most problems, while RM-MEDA is only suitable for problems in which the PS has the same image as the PF.

The PS in RM-MEDA is not allowed to lie in any subspace of the decision space, while the proposed algorithm has no such requirement.

RM-MEDA is implemented under the manifold assumption that the PS is a piecewise manifold of dimension one less than the number of objectives. In this paper, the proposed algorithm is developed without such an assumption.
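The local-PCA clustering that RM-MEDA relies on can be illustrated with a crude sketch (this is an illustrative flavor of the idea, not the exact algorithm of [49]): alternate between assigning points to the nearest cluster *line* (mean plus first principal direction) and refitting each line by PCA.

```python
import numpy as np

def local_pca_partition(X, k=2, iters=20, seed=0):
    """Crude local-PCA-flavored partition: each cluster is modeled by its
    mean and first principal direction; points go to the nearest line."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, len(X))
    for _ in range(iters):
        means, dirs = [], []
        for j in range(k):
            Xj = X[labels == j]
            if len(Xj) < 2:                      # keep empty clusters alive
                Xj = X[rng.choice(len(X), 2, replace=False)]
            mu = Xj.mean(axis=0)
            _, _, vt = np.linalg.svd(Xj - mu, full_matrices=False)
            means.append(mu)
            dirs.append(vt[0])                   # first principal direction
        # distance of every point to each cluster's principal line
        d = np.stack([
            np.linalg.norm((X - m) - np.outer((X - m) @ v, v), axis=1)
            for m, v in zip(means, dirs)], axis=1)
        labels = d.argmin(axis=1)
    return labels

# two noisy line segments, the kind of piecewise-linear PS structure
# local PCA is meant to capture
t = np.linspace(0, 1, 50)
seg1 = np.c_[t, t] + 0.01 * np.random.default_rng(1).normal(size=(50, 2))
seg2 = np.c_[t, 2.0 - t] + 0.01 * np.random.default_rng(2).normal(size=(50, 2))
labels = local_pca_partition(np.vstack([seg1, seg2]))
```

A single global PCA would fit one line through both segments, whereas the piecewise fit can follow each segment separately, which is the advantage sketched in Fig. 1.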
III Proposed Algorithm
In this section, the framework of the proposed algorithm, i.e., the improved regularity-model-based EDA for many-objective optimization (MaOEDA-IR), is given first. Then the details of each step in the framework are presented, followed by an analysis of the computational complexity. Finally, significant subcomponents of the proposed algorithm and the principles of selecting neighbor solutions are discussed, respectively. Note that the proposed algorithm is presented in the context of the problem described by Equation (1).
III-A Framework of the Proposed Algorithm
The proposed algorithm begins by reducing the dimension of the decision space (Subsection III-B). Then new promising solutions are generated in the reduced decision space, which speeds up the evolutionary progress and lowers the computational cost of the exploitation search, and their fitness is evaluated (Subsection III-C). Next, a set of reference vectors, uniformly generated on the unit hyperplane restricted to the non-negative orthant (i.e., all elements of each vector are no less than zero), is mapped (Subsection III-D), and a regularity-based model is built (Subsection III-E) for repairing the diversity of the proposed algorithm (Subsection III-F), in which a set of solutions is generated. Based on these solutions, new offspring are generated (Subsection III-G). With the help of the environmental selection operator (Subsection III-H), a set of solutions with better quality in convergence and diversity is obtained. Repairing the diversity, updating the model, sampling new solutions, and performing environmental selection are repeated for a limited number of generations. At last, the final selection is utilized to choose the representative solutions for the available slots (Subsection III-H). In addition, the maximum generation number, a set of uniformly distributed reference vectors, the neighbor size, and the respective thresholds for dimension reduction and model building need to be made available prior to running the proposed algorithm. In summary, the framework of the proposed algorithm is listed in Algorithm 2.
III-B Reducing the Dimension of the Decision Space
A dimension reduction technique is used to reduce the volume of the exploration space and thus speed up the search for sampling new solutions. Ideally, the subspace of the PS is desirable. To this end, a set of Pareto-optimal solutions is suitable as the training data, and exploitation is then performed in the resulting subspace. Note that the training data only require solutions with convergence; diversity is not necessary. Consequently, algorithms such as the conventional weighted aggregation method [52] for problems with convex PFs, and the evolutionary dynamic weighted aggregation method [53] for problems with non-convex PFs, are suitable for this purpose. In this paper, the Pareto corner search evolutionary algorithm (PCSEA) [48] is employed because 1) a set of corner Pareto-optimal solutions, which can be used as the training data as well as the extreme points (the extreme points are employed for building the regularity-based model in Subsection III-E), is obtained when PCSEA completes; 2) the source code is available; and 3) its computational cost is attractive compared to other algorithms that could generate the extreme points and the training data. Moreover, PCSEA was successfully employed to generate solutions for dimension reduction in the objective space in its seminal paper. In each generation of PCSEA, a set of lists is sorted in increasing order. Specifically, the first group of lists is sorted by the individual objectives, while the remaining lists are sorted by the exclusive norm square, where the exclusive norm square of one objective is the sum of the squares of all the other objectives. By assigning the solutions at the top of the lists a smaller rank value, the corner solutions are highlighted (more details can be found in [48]). When the evolution process of PCSEA is completed, the Pareto-optimal solutions are selected from the population as the training data.
For convenience of development, a matrix is used to represent the training data: each row denotes one solution, while the columns refer to the different decision variables. The process of dimension reduction is illustrated in Algorithm 3 and can be divided into two parts. The first part employs principal component analysis (PCA) to find a projected space that retains a predefined proportion of the information, given by a threshold. After the transformed data are projected back to the original space, the values at the reduced dimensions become zero. Thereafter, the indexes of the reduced dimensions are identified and selected. These indexes are then saved together with the mean values of the training data for evaluating the fitness of solutions generated in the reduced decision variable space.
In particular, the main reason for not using the projected principal space but a subspace of the original space is to reduce the computational complexity. Specifically, the principal space is not the original decision space, and solutions sampled from it cannot be directly used for fitness evaluation. There are two ways to solve this problem. The first is to sample solutions from the principal space and then transform them into the original space whenever they are evaluated. The other is to transform the principal space back to the original space in advance, and then sample solutions in the original (reduced) space. With the first method, the transformation needs to be performed for each solution in each generation; with the second, the transformation is done only once. Obviously, the second method has lower computational complexity. Next, we explain the mechanism of the dimension-identification steps in Algorithm 3.
By using PCA, the generated principal space has fewer dimensions than the original space. When the projected data are transformed back to the original (centered) space, the values of the dimensions reduced by PCA are all zeros. Therefore, Algorithm 3 finds these dimensions by checking where the values are zeros. Once these dimensions are found, they are removed and their indexes are stored; solutions are then sampled from the space without the reduced dimensions. When these solutions are used for fitness evaluation, the stored mean values are simply padded into the corresponding dimensions based on the stored indexes, and the padded solutions are then evaluated.
In most PCA-based methods, none of the solutions sampled from the reduced space needs to be operated on in the original space. However, in the proposed design, the solutions must be transformed back to the original space for fitness evaluation. Naturally, the dimension reduction technique used here is quite different from a general PCA-based method.
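The dimension-identification idea above can be sketched as follows (a minimal sketch; function and variable names are assumptions, not Algorithm 3 verbatim): run PCA on the Pareto-optimal training solutions, keep enough components to retain a fraction `theta` of the variance, project back, and record the dimensions whose reconstructed (centered) values are all approximately zero.

```python
import numpy as np

def reduce_decision_space(S, theta=0.95):
    """Identify removable decision-variable dimensions from training data S:
    returns kept dimensions, removed dimensions, and the column means used
    later to pad solutions for fitness evaluation."""
    mean = S.mean(axis=0)
    C = S - mean                                  # centered training data
    U, sing, Vt = np.linalg.svd(C, full_matrices=False)
    var = sing ** 2
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), theta) + 1)
    back = (C @ Vt[:k].T) @ Vt[:k]                # project to PCs, then back
    reduced_dims = np.where(np.all(np.abs(back) < 1e-9, axis=0))[0]
    kept_dims = np.setdiff1d(np.arange(S.shape[1]), reduced_dims)
    return kept_dims, reduced_dims, mean

# toy training set: only the first 2 of 5 decision variables actually vary
rng = np.random.default_rng(3)
S = np.zeros((40, 5))
S[:, :2] = rng.random((40, 2))
S[:, 2:] = 0.7                                    # constant => removable
kept, dropped, mean = reduce_decision_space(S)
```

Sampling then happens only over `kept`, while `dropped` and `mean` are stored for the padding step described next.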
III-C Creating the Population and Evaluating Fitness
Assume a reduced decision variable space whose dimension is obviously no greater than that of the original space. Based on the design principles of the proposed algorithm, each final solution is desirably associated with one reference vector, and the model is built on the neighbor solutions of one particular reference vector; the population is therefore randomly initialized in the reduced space accordingly. In order to evaluate the fitness, the created population needs to be translated back to the original decision space, which is demonstrated by Algorithm 4. Specifically, the translated population has the same number of individuals as the initialized population, and each individual in it has the original dimensionality. Furthermore, the values in the reduced dimensions are set equal to the corresponding elements of the stored mean value vector. To this end, the translated population is first initialized with zeros, then the stored mean values are written into the reduced dimensions, and finally each column of the initialized population is copied into the corresponding retained column of the translated population. Once the translation is performed, the fitness is evaluated by feeding the translated population to the problem to be optimized.
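The translation step can be sketched in a few lines (interface names are assumptions; starting from the mean vector is equivalent to the zero-initialize-then-fill order described above):

```python
import numpy as np

def translate_back(P_reduced, kept_dims, mean, total_dim):
    """Embed a population sampled in the reduced space into the full
    decision space, padding every removed dimension with its stored mean."""
    P = np.tile(mean, (len(P_reduced), 1))       # every removed dim = mean
    P[:, kept_dims] = P_reduced                  # overwrite retained dims
    assert P.shape == (len(P_reduced), total_dim)
    return P

pop = np.array([[0.1, 0.2], [0.3, 0.4]])         # sampled in the reduced space
full = translate_back(pop, kept_dims=[0, 1],
                      mean=np.array([0.0, 0.0, 0.7, 0.7, 0.7]), total_dim=5)
```

The rows of `full` can be passed directly to the objective functions, since they live in the original decision space.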
III-D Mapping the Reference Vectors
Conventionally, Das and Dennis's method [54] is employed to generate the uniformly distributed reference vectors on the unit hyperplane. However, the PF of the problem to be optimized does not necessarily span this entire hyperplane. In order to keep the reference vectors intersecting the PF, they need to be mapped. To illustrate this motivation, an example in the two-dimensional space is plotted in Fig. 2, in which nine blue lines denote the reference vectors and one line denotes the PF. Specifically, Fig. 2(a) describes the reference vectors generated by Das and Dennis's method, where only four reference vectors intersect the PF, while Fig. 2(b) depicts all nine reference vectors intersecting the PF after they are mapped. Because the final solutions in the proposed algorithm are selected by their corresponding reference vectors, the design in Fig. 2(b) is obviously preferred. In addition, Algorithm 5 presents the details of mapping the reference vectors, which can be divided into four steps. First, the objective values of the training data used for dimension reduction in Algorithm 3 and of the extreme points are calculated. Second, the ideal point is derived by selecting the minimum value in each objective; note that this approach to calculating the ideal point is also utilized in [55, 56, 57, 58, 59]. Third, the extreme points are updated by subtracting the ideal point, and the intercepts are calculated by solving a linear system whose right-hand side is an identity (all-ones) vector (repetitive points among the extreme points would lead to multiple solutions of this system; to avoid this, the trick introduced in [60] is employed). Fourth, the reference vectors uniformly generated on the unit hyperplane are mapped into the mapped reference vectors. Note that the last two steps are necessary.
Otherwise, only the origin of the coordinate system would be moved to the ideal point, and there would still be an insufficient number of reference vectors intersecting the PF (see Fig. 2(c)).
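The four steps can be sketched as follows. This is a minimal sketch under assumptions: the intercepts come from solving E @ a = 1 over the translated extreme points (as in NSGA-III-style normalization), and each unit-simplex reference vector is scaled by the intercepts; all names and the toy data are illustrative.

```python
import numpy as np

def map_reference_vectors(W, F_extreme, F_train):
    """Compute the ideal point, derive per-objective intercepts from the
    translated extreme points, and scale the unit-simplex reference vectors
    so they cover the region actually spanned by the PF."""
    allF = np.vstack([F_extreme, F_train])
    ideal = allF.min(axis=0)                   # component-wise minimum
    E = F_extreme - ideal                      # translated extreme points
    a = np.linalg.solve(E, np.ones(len(E)))    # hyperplane coefficients
    intercepts = 1.0 / a                       # axis intercepts
    return ideal, intercepts, W * intercepts   # mapped reference vectors

W = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])  # unit-simplex vectors
F_extreme = np.array([[2.0, 0.0], [0.0, 4.0]])      # toy bi-objective extremes
F_train = np.array([[1.0, 2.0], [1.5, 1.0]])
ideal, icpt, W_mapped = map_reference_vectors(W, F_extreme, F_train)
```

With intercepts (2, 4), the middle vector (0.5, 0.5) is stretched to (1, 2), so all vectors now point into the region actually covered by the PF rather than the unit simplex.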
III-E Building the Regularity-based Model
Generally, the regularity-based model is composed of multiple sub-models, owing to the complexity of the regularity, for which a single unified model can hardly capture the global intrinsic relation exactly [20, 61]. In the proposed algorithm, each sub-model is built based on one reference vector and its neighbor solutions, and Algorithm 6 presents the details.
To be specific, the sub-model of one reference vector is built from its neighbor solutions, given a threshold and an enlargement factor, as follows. First, the neighbor solutions are represented by a matrix and centered. Then the eigenvalues and eigenvectors of the covariance matrix are obtained and ordered by descending eigenvalue. Next, the principal components are determined, and the centered data are projected onto them. The projected space is then bounded by finding the minimum and maximum values of the projections, and the latent space for generating new offspring is obtained by extending these bounds with the enlargement factor along the principal components. Finally, the noise for the latent space is computed from the mean of the eigenvalues of the non-principal components, and the regularity-based sub-model is obtained.
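These steps can be sketched as follows; parameter names (`theta`, `eps`) and the exact sampling rule are assumptions in the spirit of Algorithm 6, not a verbatim implementation:

```python
import numpy as np

def build_submodel(N, theta=0.9, eps=1.2):
    """PCA over the neighbor set N: keep components covering a `theta`
    fraction of variance, bound the latent box by the projections enlarged
    by factor `eps`, and use the mean remaining eigenvalue as noise."""
    mean = N.mean(axis=0)
    C = N - mean
    evals, evecs = np.linalg.eigh(np.cov(C, rowvar=False))
    order = np.argsort(evals)[::-1]                 # descending eigenvalues
    evals, evecs = evals[order], evecs[:, order]
    k = int(np.searchsorted(np.cumsum(evals) / evals.sum(), theta) + 1)
    proj = C @ evecs[:, :k]                         # latent coordinates
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    mid, half = (lo + hi) / 2, (hi - lo) / 2 * eps  # enlarged latent box
    noise = max(evals[k:].mean() if k < len(evals) else 0.0, 0.0)
    return mean, evecs[:, :k], mid - half, mid + half, noise

def sample_submodel(model, n, seed=0):
    """Sample n new solutions: uniform in the latent box, plus Gaussian
    noise from the non-principal directions."""
    mean, B, lo, hi, noise = model
    rng = np.random.default_rng(seed)
    t = rng.uniform(lo, hi, (n, B.shape[1]))
    return mean + t @ B.T + rng.normal(0.0, np.sqrt(noise), (n, B.shape[0]))

pts = np.c_[np.linspace(0, 1, 30), np.linspace(0, 1, 30)]  # a line segment
new = sample_submodel(build_submodel(pts), n=20)
```

For neighbor solutions lying on a line segment, the learned manifold is that line, and the sampled solutions stay on it (up to the near-zero noise), with the enlargement factor allowing extrapolation slightly beyond the observed endpoints.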
III-F Repairing the Diversity
Diversity repairing samples new solutions to mitigate the adverse phenomenon that some reference vectors lack associated solutions. To this end, all the non-dominated individuals in the current population are first enumerated to find their respective nearest reference vectors. Then the neighbor solutions are selected, the model is built, and new solutions are sampled, as presented in Algorithm 7. Note that in the phase of selecting neighbor solutions for a reference vector, its neighbor solutions are drawn from the current population and the non-dominated solutions associated with it; the motivation for this is discussed in Subsection III-J. In addition, the size of the neighbor set does not need to be kept strictly fixed; for instance, it shrinks by one when a selected non-dominated neighbor solution has already been included. The reasons for not strictly keeping the neighbor size fixed are that 1) it does not affect the built model and 2) removing the extra solution would incur additional computational cost.
III-G Generating Offspring
After repairing the diversity, a solution set is generated. Then, non-dominated solutions are selected from this set and the current population (Algorithm 2). Next, the reference vectors are re-mapped by Algorithm 5 with the selected non-dominated solutions as input. Finally, offspring are generated by Algorithm 8. In summary, generating offspring can be viewed as the diversity repairing applied at each reference vector, which can be seen through the analogy between Algorithm 8 and Algorithm 7. However, the motivation of generating offspring is different from that of diversity repairing, which will be discussed in Subsection III-J. In addition, updating the reference vectors is motivated by achieving a better performance of the proposed algorithm, even though it has been reported that PCSEA is capable of finding the approximated corner solutions from which the extreme points are derived [48].
III-H Environmental Selection and Final Selection
After offspring are generated, non-dominated solutions are selected from the current population (Algorithm 2). Then, the reference vectors are updated by performing Algorithm 5 again. Next, the environmental selection is performed. Specifically, the environmental selection aims at removing ill-fit solutions from the current population and maintaining a limited number of representative individuals to reduce the computational cost in each generation, while the final selection chooses the best-fit solutions. Moreover, the two selections are interdependent in the proposed algorithm. In the framework of the proposed algorithm, it is assumed that a fixed number of solutions is required by the decision-maker when the algorithm finishes. Because the proposed algorithm relies on the statistics of the regularity of the neighbors of each solution, the number of solutions maintained for building the model should be the number of required solutions multiplied by the neighbor size. In addition, extra solutions are incorporated in the phases of diversity repairing and offspring generation in each generation. To this end, the purpose of the environmental selection is to maintain a population of the same size as the initialized population, while the final selection is for choosing the solutions delivered to the decision-maker. It is hoped that each reference point has its full quota of neighbor solutions after the environmental selection, which is described by Algorithm 9. Specifically, the final selection is implemented by replacing the per-vector quota with one in the environmental selection.
In summary, the environmental selection covers two steps. The first step is to assign the reference vectors by selecting the nondominated solutions that have the smallest perpendicular distances to them (lines 99 of Algorithm 9). The other is to set solutions for each reference vector. Specifically, nondominated sorting and truncated selection are employed when too many solutions are assigned to one reference vector (lines 99 of Algorithm 9); otherwise, the necessary solutions are selected from the current population based on the smallest perpendicular distances (lines 99 of Algorithm 9). Note that selected solutions are removed from the current population to avoid their being reselected (lines 9 and 9 of Algorithm 9).
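The two steps above can be sketched as follows. This is a simplified sketch that keeps only the perpendicular-distance assignment and omits the nondominated sorting of Algorithm 9; the names `perp_dist`, `environmental_selection`, and the per-vector quota `t` are illustrative, not the paper's actual identifiers.

```python
import math

def perp_dist(f, w):
    """Perpendicular distance from objective vector f to the ray along w."""
    wn = math.sqrt(sum(x * x for x in w))
    proj = sum(fi * wi for fi, wi in zip(f, w)) / wn
    return math.sqrt(max(sum(fi * fi for fi in f) - proj * proj, 0.0))

def environmental_selection(pop, ref_vecs, t):
    """Keep at most t solutions per reference vector, nearest first.

    pop: list of objective vectors; ref_vecs: list of reference vectors.
    Selected solutions are removed from the pool so they cannot be
    re-selected by a later reference vector (cf. Algorithm 9).
    """
    kept, remaining = [], list(pop)
    for w in ref_vecs:
        remaining.sort(key=lambda f: perp_dist(f, w))
        kept.extend(remaining[:t])   # the t closest solutions to this vector
        remaining = remaining[t:]    # avoid repetitions being reselected
    return kept
```

A usage example: with two axis-aligned reference vectors and `t = 1`, each vector keeps the single solution closest to it.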
III-I Computational Complexity
The complexity of the proposed algorithm is analyzed in the context of the problem defined by Equation (1). For convenience, the number of solutions for PCSEA to generate the training data for the dimension reduction is set to be the same as that used in the final selection of the proposed algorithm, which determines the computational cost of PCSEA. For the dimension reduction itself, most of the cost lies in computing the eigenvalues and eigenvectors. In addition, creating the population, the fitness evaluation, and mapping the reference vectors each contribute their own computation times, and the complexity of building the model is mainly contributed by the matrix factorization. Together, these costs account for lines 22 in Algorithm 2. Furthermore, the model building and the nondominated selections dominate the computational complexity of the diversity repairing, offspring generation, and environmental selection, while the final selection adds its own computational cost. Generally, the neighbor size and the maximum number of generations are set to fixed orders of magnitude. In summary, the overall computational complexity of the proposed algorithm is governed mainly by the neighbor size and the number of solutions the decision-makers require.
III-J Discussion
In the proposed algorithm, two subcomponents, dimension reduction and diversity repairing, are carefully designed to guarantee the performance. In this section, their design motivations are discussed first, and then the experimental verification is presented in Sections IV-F and IV-G.
First, redundancy exists in high dimensional data, and data from a small subset of the dimensions is often sufficient to represent them, as exploited in the feature selection discipline. In order to obtain such low dimensional data, a dimension reduction technique is utilized. Employing the low dimensional data instead of the original high dimensional data can significantly benefit its utilization, for instance by lowering the computational cost and by improving precision through removing the interference from elements in the redundant dimensions. Furthermore, a series of approaches
[40, 42, 44, 43, 45, 46, 41, 48] have been proposed to reduce the number of objectives in MaOPs, so that state-of-the-art MOEAs can then be utilized to solve them. Generally, the number of decision variables is greater than the number of objectives in MaOPs, such as the DTLZ7 problem [51], and likewise in MOPs, for example, the 2-objective ZDT1 problem [62] with 30 decision variables. Actually, it is no surprise that problems have more decision variables than objectives, because it is difficult to determine which particular factors affect the response, and a safer way of modeling is to include all the observed factors, which usually yields a larger size. As a result, dimension reduction in the decision space is appropriate and well justified, specifically for the proposed algorithm, which is based on the regularity of the decision space. Moreover, the Pareto-optimal solutions can be viewed as the features of the decision space in solving MaOPs, and if we obtain the subspace of the PS, subsequent operations can be constrained to this subspace to reduce the cost of exploration. Basically, this subspace is obtained by reducing the dimension of the Pareto-optimal solutions. Ideally, only diversity would need to be considered if the exact subspace of the PS were obtained, which is not the case in practice. To this end, extra components are incorporated in the proposed algorithm for improving the convergence, such as the reference vector updating, the nondominated selection for building the model, the diversity repairing, the offspring generation, the environmental selection, and the final selection.

To intuitively understand the diversity repairing mechanism in the proposed algorithm, an example on the DTLZ1 problem [51] is illustrated in Fig. 3. To be specific, twelve different markers in red are shown in Fig. 3(a), identifying a set of uniformly distributed reference vectors in the objective space.
The corresponding solutions (given the same markers) in the decision space with the reduced dimensions are shown in Fig. 3(b), and they are not necessarily uniformly distributed. For each given solution in the decision space, nine different neighbors are chosen and displayed with the same marker, as shown in Fig. 3(c). However, not all the reference vectors are necessarily assigned solutions in one generation, due to the heuristic nature of evolution. This can be seen from Fig. 3(d), in which blue markers denote all the solutions in this generation and the area enclosed by the ellipse indicates that there is no solution assigned to the corresponding reference vector. To this end, the diversity repairing mechanism is activated by using the neighbor solutions. The “cross” markers in Fig. 3(e) are the solutions selected for generating new solutions. These solutions are then used for diversity repairing and generate the solutions shown as “hollow circle” markers in Fig. 3(f). Intuitively, diversity repairing improves the diversity by assigning a solution to the previously empty reference vector. In fact, the convergence is also strengthened in this phase, because a nondominated solution among the newly generated ones is assigned to the reference vector that previously lacked diversity. In summary, both convergence and diversity are promoted by the diversity repairing mechanism.
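The repair step walked through in Fig. 3 amounts to fitting a local model to the chosen neighbor solutions and sampling replacements from it. The sketch below uses a per-dimension Gaussian as a simplified stand-in for the paper's regularity-based model; the function name and the `enlarge` factor are illustrative assumptions, not the authors' exact routine.

```python
import random

def sample_from_neighbors(neighbors, n_samples, enlarge=1.2):
    """Fit an axis-wise Gaussian to the neighbor solutions of one reference
    vector and sample new decision vectors from it.

    neighbors: list of decision vectors (all the same length);
    enlarge: widens the estimated spread to encourage exploration.
    """
    d = len(neighbors[0])
    n = len(neighbors)
    mean = [sum(x[j] for x in neighbors) / n for j in range(d)]
    std = [enlarge * (sum((x[j] - mean[j]) ** 2 for x in neighbors) / n) ** 0.5
           for j in range(d)]
    # each sampled vector is a candidate for the empty reference vector
    return [[random.gauss(mean[j], std[j]) for j in range(d)]
            for _ in range(n_samples)]
```

When all neighbors coincide, the estimated spread is zero and the samples collapse onto the shared point, which makes the degenerate behavior easy to check.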
Normally, the neighbor solutions for one particular reference vector consist of one nondominated solution and the other nearest solutions to this reference vector from the current population. With this neighbor solution assignment, it is expected that solutions with both convergence and diversity are sampled from the built model. Moreover, Fig. 4 highlights our motivation for this design. Specifically, the blue line denotes the reference vector, the black circles denote the current population, the rectangular area denotes the built model, and the red circles denote the sampled offspring. Fig. 4(a) plots the solutions that have the nearest perpendicular distances to the reference vector, while Fig. 4(b) depicts the case where the nondominated solution is included in the neighbor solutions. It is obvious that solutions with good diversity and convergence are generated in Fig. 4(b).
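Under this design, the neighbor set for a reference vector can be sketched as one designated nondominated solution plus the nearest remaining solutions by perpendicular distance. The helper names below are illustrative assumptions standing in for the paper's actual routine.

```python
import math

def perp_dist(f, w):
    """Perpendicular distance from objective vector f to the ray along w."""
    wn = math.sqrt(sum(x * x for x in w))
    proj = sum(fi * wi for fi, wi in zip(f, w)) / wn
    return math.sqrt(max(sum(fi * fi for fi in f) - proj * proj, 0.0))

def neighbors_for_vector(pop, nondominated, w, t):
    """Neighbor set of size t for reference vector w: one nondominated
    solution plus the t-1 members of pop nearest to w (the nondominated
    solution itself is skipped if it already appears in pop)."""
    rest = sorted((f for f in pop if f != nondominated),
                  key=lambda f: perp_dist(f, w))
    return [nondominated] + rest[:t - 1]
```

Including the nondominated solution pulls the fitted model towards the front (Fig. 4(b)), instead of centering it on possibly dominated nearby solutions (Fig. 4(a)).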
IV Experiments
To demonstrate the quality of the proposed algorithm, a series of experiments is carefully designed and performed on test problems from two benchmark test suites, DTLZ [51] and DTLZ⁻¹ [63], with 3, 5, 8, 10, and 15 objectives. Since the proposed MaOEDA-IR is an EDA-based algorithm for solving MaOPs, state-of-the-art algorithms covering two categories, 1) traditional MaOEAs (NSGA-III [14], MOEA/D [11], GrEA [13], and HypE [12]) and 2) EDA-based evolutionary algorithms (MBN-EDA [21] and RM-MEDA [20]), are considered as the peer competitors against the proposed MaOEDA-IR.
In the following subsections, the selected benchmark test problems are introduced first. Then, the performance indicators chosen to measure the results generated by the compared algorithms are documented. Next, the parameter settings utilized by the compared algorithms are declared. Finally, experiments on the compared algorithms are performed, and their results, measured by the selected performance indicators, are analyzed. In addition, empirical experiments investigating the diversity repairing, the dimension reduction, and the neighbor size are performed to highlight their superiority and their efficacy in addressing real-world problems.
IV-A Benchmark Test Problems
DTLZ1–DTLZ4, from the scalable benchmark test suite DTLZ, are considered as test instances in these experiments. Specifically, each M-objective test problem has n = M + k − 1 decision variables, where k is specified as 5 for DTLZ1 and 10 for DTLZ2–DTLZ4. Furthermore, the Pareto-optimal solutions of DTLZ1–DTLZ4 in the normalized M-dimensional objective space have the form formulated by Equation (2):

(2) f_1^p + f_2^p + … + f_M^p = c,

where p = 1 and c = 0.5 for DTLZ1, and p = 2 and c = 1 for DTLZ2–DTLZ4. Because the reference vectors generated by the systematic Das and Dennis's method in the proposed algorithm have a form similar to Equation (2), the DTLZ test problems are considered less challenging. To this end, DTLZ1⁻¹–DTLZ4⁻¹ from the DTLZ⁻¹ test suite, which is a variant of DTLZ obtained by multiplying each test problem in DTLZ by a negative sign, are included among the considered benchmark test problems for their more complicated PF shapes, which especially challenge algorithms based on reference vectors.
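The identity in Equation (2) can be checked numerically. The pairs (p, c) = (1, 0.5) for DTLZ1 and (2, 1) for DTLZ2–DTLZ4 follow the reconstruction above and the standard DTLZ Pareto-front definitions; the sample points below are illustrative.

```python
import math

def on_front(f, p, c, tol=1e-9):
    """True if objective vector f satisfies the PF identity sum(f_i^p) = c."""
    return abs(sum(fi ** p for fi in f) - c) <= tol

# a DTLZ2-style Pareto-optimal point lies on the unit sphere,
# a DTLZ1-style one on the linear simplex summing to 0.5
theta = math.pi / 6
f_dtlz2 = [math.cos(theta), math.sin(theta)]  # 2-objective slice of the sphere
f_dtlz1 = [0.2, 0.2, 0.1]                     # objectives summing to 0.5
```

Das–Dennis reference vectors satisfy the p = 1 form with c = 1, which is why their shape matches these fronts so closely.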
IV-B Performance Metrics
Two widely used performance metrics, the Inverted Generational Distance (IGD) [64] and the Hypervolume (HV) [24], which can simultaneously quantify the convergence and diversity performance of the algorithms, are adopted in these experiments. The results generated by the compared algorithms are normalized prior to employing the performance indicators, in the same manner as [65]. In addition, reference points are uniformly sampled from Equation (2) for the calculation of IGD, and a fixed reference point is specified for the calculation of HV. Furthermore, Monte Carlo simulation [12] is applied for the calculation of HV for the larger objective numbers; otherwise, the exact approach proposed in [66] is utilized, because its computation cost dramatically increases as the number of objectives grows.
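As a concrete reference for the IGD metric described above, a minimal implementation is given below (the HV computation is considerably more involved and is omitted).

```python
import math

def igd(reference_points, solutions):
    """Inverted Generational Distance: mean Euclidean distance from each
    reference point sampled on the true PF to its nearest obtained
    solution (lower is better; captures both convergence and spread)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return (sum(min(dist(r, s) for s in solutions) for r in reference_points)
            / len(reference_points))
```

For example, covering only one of two reference points on a 2-objective front leaves the other point's distance in the average, so the score worsens as coverage degrades.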
IV-C Parameter Settings
In this subsection, the parameter settings are presented. First, the general settings shared by most compared algorithms are listed. Thereafter, special settings for particular algorithms are specified.
IV-C1 Crossover and Mutation
SBX [67] and polynomial mutation [68] are employed as the crossover and mutation operators, respectively. Furthermore, the probability for polynomial mutation is set to 1/n, where n is the number of decision variables, with standard settings for the SBX probability. In addition, the crossover distribution index of NSGA-III is set to 30 according to the suggestions in [14], while the conventional value is used for the other algorithms.
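For reference, a sketch of the SBX operator on real-coded parents follows. Bound handling and the per-variable swap probability are omitted for brevity, and the `eta` and `pc` defaults are illustrative assumptions rather than the exact experimental settings.

```python
import random

def sbx(p1, p2, eta=20.0, pc=1.0):
    """Simulated binary crossover (SBX) on real-coded parent vectors.

    Draws a spread factor beta from the SBX polynomial distribution with
    index eta; larger eta keeps children closer to their parents.
    """
    c1, c2 = list(p1), list(p2)
    if random.random() > pc:
        return c1, c2  # no crossover: children are copies of the parents
    for i in range(len(p1)):
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1[i] = 0.5 * ((1 + beta) * p1[i] + (1 - beta) * p2[i])
        c2[i] = 0.5 * ((1 - beta) * p1[i] + (1 + beta) * p2[i])
    return c1, c2
```

A defining property of SBX is that, per variable, the children's mean equals the parents' mean, which gives a simple sanity check.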
IV-C2 Population Size
In principle, the population size can be set arbitrarily. However, reference-vector-assisted algorithms, such as NSGA-III, require the population size to match the number of reference vectors, so the other peer algorithms adopt the same population size for a fair comparison. Furthermore, only boundary reference vectors are generated when the number of divisions is less than the number of objectives in the phase of sampling the reference vectors. To this end, the two-layer approach [14] is employed for generating the reference vectors. In addition, the implementations of GrEA and NSGA-III require the population size to be a multiple of four. In summary, the settings for the reference vectors and population sizes are listed in Table I.
M  # of divisions (outer, inner)  # of reference vectors  population size of GrEA and NSGA-III
3  14, —  120  120
5  5, —  126  128
8  3, 2  156  156
10  2, 2  110  112
15  2, 1  135  136
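The reference-vector counts in Table I follow from the Das–Dennis formula C(h + m − 1, m − 1) for m objectives and h divisions; the two-layer splits below are inferred from the tabulated counts and so should be read as a reconstruction.

```python
from math import comb

def das_dennis_count(m, h):
    """Number of Das-Dennis reference vectors for m objectives, h divisions."""
    return comb(h + m - 1, m - 1)

# two-layer counts reproducing the "# of reference vectors" column of Table I
table = {
    3:  das_dennis_count(3, 14),                           # single layer: 120
    5:  das_dennis_count(5, 5),                            # single layer: 126
    8:  das_dennis_count(8, 3) + das_dennis_count(8, 2),   # 120 + 36 = 156
    10: das_dennis_count(10, 2) + das_dennis_count(10, 2), # 55 + 55 = 110
    15: das_dennis_count(15, 2) + das_dennis_count(15, 1), # 120 + 15 = 135
}
```

Rounding each count up to the next multiple of four then yields the GrEA and NSGA-III population sizes in the last column (e.g. 126 → 128, 135 → 136).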
IV-C3 Special Settings
The grid sizes of GrEA are tested individually over a range of candidate values, and the best scores according to the corresponding performance indicators are picked for comparison. Because RM-MEDA is originally designed for MOPs, its default configuration is not suitable for solving MaOPs in these experiments. Consequently, the parameter settings of RM-MEDA are slightly modified to maximize its performance on MaOPs. Specifically, the number of clusters in local PCA is varied over several values, and the maximum iterations of local PCA are tested individually under a fixed maximum number of generations for selecting the best mean value indicated by the performance metrics, while the parameter settings of MBN-EDA follow the developers' suggestions in [21]. In addition, both the thresholds for the dimension reduction and the model building are set according to the convention of the community. Furthermore, the neighbor size is specified based on the investigation in Subsection IV-E, and the enlargement factor is set accordingly. In the settings of the proposed algorithm, the generation number and the population size of PCSEA for obtaining the training data are also specified.
The proposed MaOEDA-IR is based on the training data obtained by PCSEA. For a fair comparison, the termination criterion of MaOEDA-IR is set to the total function evaluation number of the peer competitors minus that of PCSEA. Table II shows these particular settings for each considered number of objectives. (These settings apply only to the experimental results in Subsection IV-D. For the other experiments, the termination criteria for MaOEDA-IR and PCSEA are specified as fixed generation numbers, and the population size for PCSEA is set depending on the number of objectives.)
M  total evaluation numbers  evaluation numbers for PCSEA  evaluation numbers for MaOEDA-IR
3  7.2E+4  1.5E+4  5.7E+4
5  1.3E+5  2.5E+4  1.0E+5
8  2.5E+5  4.0E+4  2.1E+5
10  2.2E+5  5.0E+4  1.7E+5
15  4.1E+5  7.5E+4  3.3E+5
IV-D Performance on DTLZ and DTLZ⁻¹
The HV results of the proposed MaOEDA-IR against its peer competitors (NSGA-III, MOEA/D, GrEA, HypE, MBN-EDA, and RM-MEDA) on the DTLZ1–DTLZ4 and DTLZ1⁻¹–DTLZ4⁻¹ test problems with 3, 5, 8, 10, and 15 objectives are presented in Table III. Each compared algorithm is run independently multiple times, and the best median HV results are highlighted in bold face. Moreover, the Mann–Whitney–Wilcoxon rank-sum test [69] is applied to the HV results due to the heuristic nature of the peer algorithms, and the symbols “+,” “=,” and “-” denote whether the HV results of the proposed MaOEDA-IR are statistically better than, equal to, or worse than those of the corresponding peer competitor. In addition, the last row in Table III summarizes how many times the proposed MaOEDA-IR is better than, equal to, or worse than each peer competitor.
The results in Table III indicate that the proposed MaOEDA-IR obtains the best performance on the DTLZ4, DTLZ1⁻¹, and DTLZ3⁻¹ test problems with all considered objective numbers, except for the 5-objective DTLZ1⁻¹ and the 8-objective DTLZ3⁻¹, on which it is worse than GrEA. Moreover, MaOEDA-IR is superior to the others on DTLZ3 with 8 and 10 objectives, while it is inferior to several competitors, including GrEA, RM-MEDA, and NSGA-III, on some of the remaining objective numbers. Although the performance of MaOEDA-IR on DTLZ2⁻¹ and DTLZ4⁻¹ is worse than that of NSGA-III and GrEA on the lower objective numbers, the proposed MaOEDA-IR performs better than the others on the higher ones. In addition, the proposed MaOEDA-IR outperforms the peer competitors on the DTLZ1 and DTLZ2 test problems with the larger objective numbers. In summary, the HV results of the proposed MaOEDA-IR against the selected compared algorithms over eight test problems with 3, 5, 8, 10, and 15 objectives indicate that MaOEDA-IR has a comparable performance, winning 181 out of 240 comparisons and performing equally well in 10 of them.
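The statistical comparison behind the “+/=/-” symbols can be reproduced with a normal-approximation rank-sum test. The stand-alone sketch below uses midranks for ties but no further tie or continuity correction; it illustrates the test's standard form and is not the authors' exact implementation.

```python
import math

def rank_sum_p_approx(a, b):
    """Two-sided Mann-Whitney-Wilcoxon rank-sum p-value (normal approx.)."""
    combined = sorted((v, 0 if i < len(a) else 1)
                      for i, v in enumerate(list(a) + list(b)))
    vals = [v for v, _ in combined]
    ranks, i = {}, 0
    while i < len(vals):           # assign midranks to runs of tied values
        j = i
        while j < len(vals) and vals[j] == vals[i]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + j + 1) / 2.0   # average of 1-based ranks i+1..j
        i = j
    r_a = sum(ranks[k] for k, (_, g) in enumerate(combined) if g == 0)
    n1, n2 = len(a), len(b)
    u = r_a - n1 * (n1 + 1) / 2.0          # Mann-Whitney U for sample a
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))   # two-sided p-value
```

Identical samples yield p = 1.0, while fully separated samples of size three fall just under the 0.05 threshold, matching the intuition behind the significance symbols.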
Problem  M  MaOEDA-IR  NSGA-III  MOEA/D  GrEA  HypE  MBN-EDA  RM-MEDA
DTLZ1  3  0.838(2.1E-2)  0.915(5.6E-2)(-)  0.816(1.6E-3)(+)  0.925(8.2E-2)(-)  0.906(6.3E-2)(-)  0.999(9.8E-4)(-)  0.835(2.5E-2)(=)
5  0.959(3.3E-3)  0.986(8.9E-3)(-)  0.936(2.6E-3)(+)  0.989(8.0E-3)(-)  0.973(3.0E-2)(-)  0.970(6.9E-3)(-)  0.980(9.5E-5)(-)
8  0.997(6.1E-4)  0.996(3.1E-3)(=)  0.202(1.1E-1)(+)  0.988(5.0E-2)(+)  0.982(1.8E-2)(+)  0.984(3.5E-3)(+)  0.997(2.4E-3)(=)
10  0.991(4.2E-4)  0.990(4.0E-3)(+)  0.047(6.7E-2)(+)  0.971(2.4E-2)(+)  0.937(5.1E-2)(+)  0.984(4.1E-3)(+)  0.992(5.2E-3)(-)
15  0.997(2.2E-4)  0.994(6.1E-3)(+)  0.153(1.5E-1)(+)  0.968(3.0E-2)(+)  0.930(4.3E-2)(+)  0.992(2.3E-3)(+)  0.993(4.2E-3)(+)
DTLZ2  3  0.533(1.2E-3)  0.645(4.7E-2)(-)  0.540(2.1E-3)(-)  0.543(1.9E-3)(-)  0.361(2.9E-2)(+)  0.563(4.5E-2)(-)  0.566(1.6E-3)(-)
5  0.780(5.5E-3)  0.956(1.1E-2)(-)  0.710(1.8E-3)(+)  0.795(1.2E-3)(-)  0.494(6.4E-2)(+)  0.739(4.0E-2)(+)  0.795(1.0E-2)(-)
8  0.925(6.7E-4)  0.924(4.3E-3)(+)  0.925(8.7E-3)(=)  0.904(2.8E-3)(+)  0.543(8.6E-2)(+)  0.842(3.9E-2)(+)  0.020(1.1E-2)(+)
10  0.932(9.3E-3)  0.931(1.0E-2)(+)  0.002(4.1E-3)(+)  0.922(3.7E-3)(+)  0.443(7.2E-2)(+)  0.879(3.1E-2)(+)  0.880(1.8E-2)(+)
15  0.979(4.2E-3)  0.982(4.4E-4)(-)  0.066(1.1E-1)(+)  0.888(1.8E-2)(+)  0.404(7.2E-2)(+)  0.922(1.9E-2)(+)  0.876(1.6E-2)(+)
DTLZ3  3  0.513(2.7E-3)  0.986(3.0E-2)(-)  0.574(7.3E-2)(-)  0.979(2.2E-4)(-)  0.816(1.7E-1)(-)  0.680(4.6E-2)(-)  0.977(7.1E-3)(-)
5  0.989(3.6E-4)  0.987(1.8E-2)(+)  0.704(2.8E-2)(+)  0.794(1.7E-3)(+)  0.995(3.7E-3)(-)  0.862(3.1E-2)(+)  0.996(1.4E-3)(-)
8  0.997(2.6E-5)  0.994(6.5E-3)(+)  0.313(1.9E-1)(+)  0.923(5.6E-4)(+)  0.995(6.2E-3)(+)  0.921(1.5E-2)(+)  0.949(8.9E-3)(+)
10  0.989(2.0E-5)  0.984(1.4E-2)(+)  0.096(1.4E-1)(+)  0.943(8.2E-4)(+)  0.988(1.0E-2)(+)  0.929(1.4E-2)(+)  0.898(1.9E-2)(+)
15  0.975(4.9E-4)  0.997(3.0E-3)(-)  0.079(9.3E-2)(+)  0.902(2.2E-2)(+)  0.985(1.2E-2)(-)  0.963(4.1E-3)(+)  0.887(1.6E-2)(+)
DTLZ4  3  0.733(2.9E-4)  0.521(9.2E-2)(+)  0.445(1.0E-1)(+)  0.476(1.7E-1)(+)  0.471(1.0E-1)(+)  0.565(2.1E-3)(+)  0.538(3.2E-3)(+)
5  0.920(6.9E-3)  0.798(9.0E-3)(+)  0.614(5.9E-2)(+)  0.785(3.0E-2)(+)  0.585(6.6E-2)(+)  0.795(1.1E-3)(+)  0.726(2.8E-2)(+)
8  0.938(5.2E-3)  0.923(2.3E-3)(+)  0.065(5.7E-2)(+)  0.916(2.1E-3)(+)  0.521(9.2E-2)(+)  0.928(9.5E-4)(+)  0.859(1.4E-2)(+)
10  0.953(6.2E-5)  0.943(2.3E-3)(+)  0.011(2.1E-2)(+)  0.936(1.3E-3)(+)  0.449(9.4E-2)(+)  0.949(9.7E-4)(+)  0.897(2.1E-2)(+)
15  0.997(3.2E-5)  0.991(5.5E-4)(+)  0.009(1.9E-2)(+)  0.936(9.6E-3)(+)  0.546(9.1E-2)(+)  0.991(2.3E-4)(+)  0.985(9.4E-3)(+)
DTLZ1⁻¹  3  0.289(3.1E-3)  0.217(3.2E-3)(+)  0.208(3.2E-3)(+)  0.227(1.3E-3)(+)  0.121(1.0E-2)(+)  0.118(1.6E-2)(+)  0.255(1.6E-2)(+)
5  0.010(2.7E-3)  0.019(3.8E-3)(-)  0.007(4.1E-4)(+)  0.199(9.0E-3)(-)  0.001(1.0E-4)(+)  0.019(3.7E-3)(-)  0.011(1.0E-3)(=)
8  0.112(2.9E-2)  0.001(5.2E-4)(+)  0.000(1.2E-5)(+)  0.000(1.4E-5)(+)  0.000(3.1E-6)(+)  0.001(0.0E+0)(+)  0.000(2.6E-5)(+)
10  0.099(3.6E-4)  0.000(1.7E-4)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)  0.000(6.6E-6)(+)
15  0.102(6.6E-3)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)
DTLZ2⁻¹  3  0.533(7.2E-2)  0.541(1.0E-2)(-)  0.527(1.8E-3)(+)  0.587(2.3E-2)(-)  0.465(2.5E-2)(+)  0.340(2.3E-2)(+)  0.531(6.1E-3)(+)
5  0.067(3.3E-5)  0.140(2.0E-2)(-)  0.078(2.0E-3)(-)  0.258(3.1E-2)(-)  0.008(1.3E-3)(+)  0.144(2.2E-2)(-)  0.067(9.7E-3)(=)
8  0.102(5.2E-3)  0.012(4.5E-3)(+)  0.000(4.5E-5)(+)  0.001(8.4E-5)(+)  0.000(1.6E-5)(+)  0.017(3.4E-3)(+)  0.001(2.1E-4)(+)
10  0.033(3.6E-2)  0.004(2.4E-3)(+)  0.000(0.0E+0)(+)  0.000(1.5E-5)(+)  0.000(0.0E+0)(+)  0.002(5.7E-5)(+)  0.000(2.4E-5)(+)
15  0.003(3.2E-3)  0.000(4.2E-5)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)
DTLZ3⁻¹  3  0.553(1.9E-2)  0.539(1.1E-2)(+)  0.535(1.7E-2)(+)  0.540(1.7E-3)(+)  0.483(1.5E-2)(+)  0.242(2.9E-2)(+)  0.547(2.4E-2)(+)
5  0.230(3.3E-2)  0.131(1.9E-2)(+)  0.119(1.1E-2)(+)  0.071(9.0E-4)(+)  0.074(3.6E-3)(+)  0.105(1.2E-2)(+)  0.087(8.6E-3)(+)
8  0.001(3.2E-3)  0.012(3.0E-3)(-)  0.001(1.8E-4)(=)  0.102(1.2E-2)(-)  0.001(2.3E-4)(=)  0.014(4.0E-3)(-)  0.001(2.2E-4)(=)
10  0.030(3.0E-3)  0.003(2.0E-3)(+)  0.000(6.9E-6)(+)  0.000(1.4E-5)(+)  0.000(2.4E-5)(+)  0.003(1.1E-4)(+)  0.000(2.8E-5)(+)
15  0.003(2.1E-5)  0.000(1.6E-4)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)
DTLZ4⁻¹  3  0.531(2.8E-3)  0.533(6.4E-3)(-)  0.528(2.0E-3)(+)  0.621(3.2E-2)(-)  0.492(1.4E-2)(+)  0.170(1.2E-2)(+)  0.511(1.2E-2)(+)
5  0.060(3.9E-3)  0.076(1.5E-2)(-)  0.071(2.1E-3)(-)  0.329(3.4E-2)(-)  0.007(1.4E-3)(+)  0.005(1.1E-3)(+)  0.014(2.7E-3)(+)
8  0.001(6.3E-3)  0.002(1.0E-3)(-)  0.000(2.4E-5)(+)  0.124(1.1E-2)(-)  0.002(7.9E-6)(-)  0.000(0.0E+0)(+)  0.001(1.5E-5)(=)
10  0.033(3.2E-3)  0.000(2.9E-4)(+)  0.000(0.0E+0)(+)  0.000(3.1E-6)(+)  0.000(2.2E-6)(+)  0.000(0.0E+0)(+)  0.001(2.2E-6)(+)
15  0.005(6.2E-2)  0.003(3.8E-5)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)  0.000(0.0E+0)(+)
+/=/-  25/1/14  34/2/4  28/0/12  33/1/6  33/0/7  28/6/6
IV-E Investigation on Neighbor Size
To investigate how the neighbor size affects the performance of the proposed MaOEDA-IR, a series of experiments is performed by varying the neighbor size over a range of values. Specifically, the results measured by IGD on the DTLZ1 test problems with the considered objective numbers are plotted in Fig. 5, which clearly shows that appreciable changes take place for smaller neighbor sizes, while the results gradually remain steady as the neighbor size increases. The interpretation is that, when the neighbor size is small, the neighbor solutions of one particular reference vector used for building the model partly belong to other reference vectors, which causes inaccuracy in the built model; the new solutions generated from it then lead to deteriorating IGD values. Especially, most IGD values level off beyond a moderate neighbor size in Fig. 5 (which, based on our investigations, also applies to the other tested benchmark problems), while a larger neighbor size increases the computational cost by introducing more initialized solutions. As a consequence, a moderate neighbor size is adopted in our experiments.
IV-F Investigation on Diversity Repairing Mechanism
As discussed in Subsection III-J, both diversity and convergence are improved by the diversity repairing mechanism. To this end, experimental comparisons on the test problems with and without the diversity repairing mechanism are performed. Specifically, the evolution trajectories of the IGD and HV values on two DTLZ2 test instances with different objective numbers are illustrated in the corresponding subfigures (a) and (b). In these figures, the red lines denote the results without the diversity repairing mechanism, while the blue lines refer to those with it. To be specific, the IGD values of the lower-dimensional DTLZ2 instance, both with and without the diversity repairing mechanism, sharply decrease during the early generations and then gradually remain stable, while those of the higher-dimensional instance smoothly decline throughout the entire evolution. The HV values of both variants grow substantially in the early generations and then go up moderately as the evolution continues, while those produced by the proposed algorithm without the diversity repairing mechanism stay lower than those with it during the entire evolution. In summary, both the IGD and the HV trajectories demonstrate the promising performance of the proposed MaOEDA-IR when the diversity repairing mechanism is employed.
Problem  M  With Diversity Repairing  Without Diversity Repairing
DTLZ1  3  0.847(1.2E-3)  0.843(1.2E-3)(=)
5  0.975(4.5E-4)  0.933(6.6E-2)(+)
8  0.997(1.7E-4)  0.844(5.6E-2)(+)
10  0.997(1.5E-4)  0.590(7.5E-2)(+)
15  0.999(1.2E-4)  0.727(9.7E-3)(+)
DTLZ2  3  0.567(2.8E-3)  0.503(4.3E-2)(+)
5  0.796(4.0E-3)  0.709(5.0E-2)(+)
8  0.923(9.6E-4)  0.811(9.2E-3)(+)
10  0.943(7.2E-4)  0.924(1.5E-2)(+)
15  0.987(1.0E-3)  0.933(1.2E-3)(+)
DTLZ3  3  0.566(1.3E-3)  0.558(1.0E-3)(=)
5  0.999(4.1E-6)  0.777(3.1E-2)(+)
8  0.997(5.5E-6)  0.785(2.0E-2)(+)
10  0.998(2.8E-3)  0.554(6.3E-2)(+)
15  0.982(6.1E-4)  0.652(1.7E-4)(+)
DTLZ4  3  0.750(8.3E-2)  0.745(3.9E-3)(=)
5  0.937(1.7E-2)  0.925(3.4E-2)(+)
8  0.991(2.9E-3)  0.881(5.5E-2)(+)
10  0.992(3.8E-3)  0.813(4.9E-2)(+)
15  0.999(6.1E-4)  0.900(3.7E-5)(+)
DTLZ1⁻¹  3  0.338(3.9E-3)  0.285(3.1E-2)(+)
5  0.011(3.3E-4)  0.002(5.3E-2)(+)
8  0.142(5.1E-3)  0.129(3.0E-3)(+)
10  0.124(7.0E-3)  0.042(5.1E-2)(+)
15  0.102(6.7E-3)  0.010(2.8E-3)(+)
DTLZ2⁻¹  3  0.541(1.4E-3)  0.443(9.0E-2)(+)
5  0.071(9.6E-4)  0.063(3.0E-2)(+)
8  0.103(1.5E-2)  0.014(9.6E-2)(+)
10  0.038(7.7E-3)  0.006(2.9E-2)(+)
15  0.003(8.9E-4)  0.000(0.0E+0)(=)
DTLZ3⁻¹  3  0.595(2.1E-2)  0.518(9.1E-2)(+)
5  0.255(3.0E-2)  0.240(6.1E-2)(+)
8  0.001(6.5E-5)  0.000(0.0E+0)(=)
10  0.036(7.2E-3)  0.013(1.1E-4)(+)
15  0.003(9.0E-4)  0.000(0.0E+0)(=)
DTLZ4⁻¹  3  0.540(1.2E-3)  0.472(9.7E-4)(+)
5  0.069(7.7E-4)  0.031(9.7E-2)(+)
8  0.001(5.9E-5)  0.000(0.0E+0)(=)
10  0.043(8.0E-3)  0.010(5.5E-2)(+)
15  0.005(1.5E-3)  0.000(0.0E+0)(=)
+/=/-  33/7/0
Furthermore, Table IV shows the extensive experimental comparisons between the proposed algorithm with and without the diversity repairing mechanism. Specifically, these experiments are independently repeated over each test problem, and their results are measured by HV. Moreover, the best median HV results are highlighted in bold face, and the symbols “+,” “=,” and “-” denote whether the HV results of the proposed algorithm with the diversity repairing mechanism are statistically better than, equal to, or worse than those without it. In addition, the last row in Table IV summarizes how many times the proposed algorithm with the diversity repairing mechanism is better than, equal to, or worse than itself without this mechanism. Table IV clearly shows that, when the diversity repairing mechanism is employed, the proposed algorithm obtains all the best median HV results and most of the best statistical results over DTLZ1–DTLZ4 and DTLZ1⁻¹–DTLZ4⁻¹ with 3, 5, 8, 10, and 15 objectives, while without the mechanism it cannot even obtain effective solutions over DTLZ2⁻¹ with 15 objectives, DTLZ3⁻¹ with 8 and 15 objectives, and DTLZ4⁻¹ with 8 objectives (i.e., the HV results of the generated solutions are approximately zero, which is caused by their being dominated by the reference points employed for the calculation of HV). In addition, it can also be observed that the diversity repairing mechanism may not significantly improve the performance in solving MOPs, where the phenomenon of diversity loss is not severe. For example, the proposed algorithm obtains the same statistical results over the 3-objective DTLZ1, DTLZ3, and DTLZ4 test problems regardless of whether the diversity repairing mechanism is employed. In summary, the diversity repairing mechanism can significantly improve the performance of the proposed algorithm, especially in solving MaOPs.
IV-G Investigation on Dimension Reduction
It is expected that dimension reduction in the decision space is capable of reducing the computational complexity of the proposed MaOEDA-IR. To verify this, two types of experiments are performed. The first measures the performance of the solutions generated by the proposed algorithm with and without the dimension reduction within the same number of generations. The other compares the numbers of generations needed to achieve the same performance with and without the dimension reduction. In the following, the first experimental results are shown, while the second comparison is presented in the Supplemental Materials. Specifically, the generation number of the first experiment is adopted from the parameter settings in Subsection IV-C. The experimental comparisons are independently repeated with and without the dimension reduction over DTLZ1–DTLZ4 and DTLZ1⁻¹–DTLZ4⁻¹ with 3, 5, 8, 10, and 15 objectives. Their results are measured by HV and shown in Table V, where the best median HV results are highlighted in bold face, and the symbols “+,” “=,” and “-” denote whether the HV results of the proposed algorithm with the dimension reduction are statistically better than, equal to, or worse than those without it. Furthermore, the last row in Table V summarizes how many times the proposed algorithm with the dimension reduction is better than, equal to, or worse than itself without this technique. It is obvious from Table V that the proposed algorithm obtains a significant performance improvement when the dimension reduction is employed.
Furthermore, without the dimension reduction, the proposed algorithm cannot perform well over several test problems, such as DTLZ1⁻¹ with 5 objectives, DTLZ2⁻¹ with 10 and 15 objectives, and DTLZ3⁻¹ as well as DTLZ4⁻¹ with 8, 10, and 15 objectives (their HV results are zeros). In summary, the proposed algorithm shows its superiority when the dimension reduction is utilized.
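The decision-space reduction studied here can be sketched with a plain PCA, implemented below via power iteration with deflation. This is a generic stand-in for the eigen-decomposition step discussed in Subsection III-I, with illustrative names; it is not the paper's exact procedure.

```python
import random

def pca_reduce(points, k, iters=200):
    """Project points onto their k leading principal directions.

    Uses power iteration on the covariance matrix, deflating after each
    extracted direction; returns the k-dimensional coordinates of the
    mean-centered points.
    """
    n, d = len(points), len(points[0])
    mean = [sum(p[j] for p in points) / n for j in range(d)]
    centered = [[p[j] - mean[j] for j in range(d)] for p in points]
    cov = [[sum(c[i] * c[j] for c in centered) / n for j in range(d)]
           for i in range(d)]
    axes = []
    for _ in range(k):
        v = [random.random() for _ in range(d)]    # random start vector
        for _ in range(iters):                     # power iteration
            w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
            norm = sum(x * x for x in w) ** 0.5
            v = [x / norm for x in w]
        lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(d))
                  for i in range(d))
        axes.append(v)
        # deflate: remove the found component from the covariance
        cov = [[cov[i][j] - lam * v[i] * v[j] for j in range(d)]
               for i in range(d)]
    return [[sum(c[j] * a[j] for j in range(d)) for a in axes]
            for c in centered]
```

For points lying on a line in three dimensions, a single principal direction recovers all of the variance, which mirrors the intuition of projecting the PS onto its low-dimensional subspace.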
Problem  M  With Dimension Reduction  Without Dimension Reduction
DTLZ1  3  0.847(1.2E-3)  0.776(3.2E-3)(+)
5  0.975(4.5E-4)  0.969(3.4E-4)(+)
8  0.997(1.7E-4)  0.902(4.4E-3)(+)
10  0.997(1.5E-4)  0.948(6.6E-2)(+)
15  0.999(1.2E-4)  0.954(1.6E-2)(+)
DTLZ2  3  0.567(2.8E-3)  0.555(2.6E-2)(+)
5  0.796(4.0E-3)  0.746(5.1E-2)(+)
8  0.923(9.6E-4)  0.907(7.0E-3)(+)
10  0.943(7.2E-4)  0.909(8.9E-2)(+)
15  0.987(1.0E-3)  0.828(9.6E-3)(+)
DTLZ3  3  0.566(1.3E-3)  0.511(2.4E-5)(+)
5  0.999(4.1E-6)  0.985(9.3E-3)(+)
8  0.997(5.5E-6)  0.912(3.5E-4)(+)
10  0.998(2.8E-3)  0.328(2.0E-2)(+)
15  0.982(6.1E-4)  0.928(2.5E-4)(+)
DTLZ4  3  0.750(8.3E-2)  0.688(2.9E-4)(+)
5  0.937(1.7E-2)  0.890(7.6E-2)(+)
8  0.991(2.9E-3)  0.956(7.5E-3)(+)
10  0.992(3.8E-3)  0.109(3.8E-4)(+)
15  0.999(6.1E-4)  0.940(5.7E-4)(+)
DTLZ1⁻¹  3  0.338(3.9E-3)  0.330(4.7E-2)(+)
5  0.011(3.3E-4)  0.000(0.0E+0)(=)
8  0.142(5.1E-3)  0.089(3.4E-2)(+)
10  0.124(7.0E-3)  0.046(1.6E-4)(+)
15  0.102(6.7E-3)  0.009(7.9E-3)(+)
DTLZ2⁻¹  3  0.541(1.4E-3)  0.510(7.5E-4)(+)
5  0.071(9.6E-4)  0.018(4.5E-2)(+)
8  0.103(1.5E-2)  0.086(8.4E-3)(+)
10  0.038(7.7E-3)  0.000(0.0E+0)(=)
15  0.003(8.9E-4)  0.000(0.0E+0)(=)
DTLZ3⁻¹  3  0.595(2.1E-2)  0.580(9.6E-3)(+)
5  0.255(3.0E-2)  0.172(4.6E-5)(+)
8  0.001(6.5E-5)  0.000(0.0E+0)(=)
10  0.036(7.2E-3)  0.000(0.0E+0)(=)
15  0.003(9.0E-4)  0.000(0.0E+0)(=)
DTLZ4⁻¹  3  0.540(1.2E-3)  0.532(2.6E-3)(+)
5  0.069(7.7E-4)  0.029(1.5E-3)(+)
8  0.001(5.9E-5)  0.000(0.0E+0)(=)
10  0.043(8.0E-3)  0.000(0.0E+0)(=)
15  0.005(1.5E-3)  0.000(0.0E+0)(=)
+/=/-  31/9/0
V Conclusion
In solving many-objective optimization problems, the performance of most multiobjective evolutionary algorithms often deteriorates appreciably because of the loss of selection pressure during the evolutionary process. This is largely because the selected parent solutions do not generate promising individuals with the conventional genetic operators to direct the search towards the Pareto-optimal front. An improved regularity-based estimation of distribution algorithm, which generates new solutions with a probabilistic model built from the solutions the algorithm has visited, is proposed in this paper. To be specific, the proposed algorithm innovates in the following aspects: 1) devising a diversity repairing mechanism to reduce the risk of dominance-resistant solutions and 2) generating promising solutions with the statistics of regularity learned from the neighboring solutions of the representatives, which are uniformly distributed in the objective space. These two steps work in conjunction with each other to direct the search towards the Pareto-optimal front. In addition, a dimension reduction technique is utilized to reduce the cost of exploitation and exploration. Furthermore, in addition to the investigations performed on the diversity repairing and the dimension reduction, an investigation of how the neighbor size affects the performance of the proposed algorithm is performed to provide a guideline for decision-makers. Extensive experiments are performed, and the results measured by the chosen performance metrics indicate that the proposed algorithm shows superiority in tackling many-objective optimization problems. In our future research, we will extend the proposed algorithm to deal with highly constrained many-objective optimization problems, in which complicated regularity of the Pareto fronts often exists.
References
 [1] M. Farina and P. Amato, “On the optimal solution definition for many-criteria optimization problems,” in Proceedings of the NAFIPS-FLINT International Conference, 2002, pp. 233–238.
 [2] O. Chikumbo, E. Goodman, and K. Deb, “Approximating a multi-dimensional Pareto front for a land use management problem: A modified MOEA with an epigenetic silencing metaphor,” in Evolutionary Computation (CEC), 2012 IEEE Congress on. IEEE, 2012, pp. 1–9.
 [3] R. J. Lygoe, M. Cary, and P. J. Fleming, “A real-world application of a many-objective optimisation complexity reduction process,” in Evolutionary Multi-Criterion Optimization. Springer, 2013, pp. 641–655.
 [4] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
 [5] E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: Improving the strength Pareto evolutionary algorithm,” 2001.
 [6] J. Knowles and D. Corne, “Quantifying the effects of objective space dimension in evolutionary multiobjective optimization,” in Evolutionary Multi-Criterion Optimization. Springer, 2007, pp. 757–771.
 [7] C. M. Fonseca and P. J. Fleming, “Multiobjective optimization and multiple constraint handling with evolutionary algorithms—Part I: A unified formulation,” IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, vol. 28, no. 1, pp. 26–37, 1998.
 [8] R. C. Purshouse and P. J. Fleming, “On the evolutionary optimization of many conflicting objectives,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 770–784, 2007.
 [9] B. Li, J. Li, K. Tang, and X. Yao, “Many-objective evolutionary algorithms: A survey,” ACM Computing Surveys (CSUR), vol. 48, no. 1, p. 13, 2015.
 [10] T. Wagner, N. Beume, and B. Naujoks, “Pareto-, aggregation-, and indicator-based methods in many-objective optimization,” in Evolutionary Multi-Criterion Optimization. Springer, 2007, pp. 742–756.
 [11] Q. Zhang and H. Li, “MOEA/D: A multiobjective evolutionary algorithm based on decomposition,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.
 [12] J. Bader and E. Zitzler, “HypE: An algorithm for fast hypervolume-based many-objective optimization,” Evolutionary Computation, vol. 19, no. 1, pp. 45–76, 2011.
 [13] S. Yang, M. Li, X. Liu, and J. Zheng, “A grid-based evolutionary algorithm for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 17, no. 5, pp. 721–736, 2013.
 [14] K. Deb and H. Jain, “An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, 2014.
 [15] A. Trivedi, D. Srinivasan, K. Sanyal, and A. Ghosh, “A survey of multiobjective evolutionary algorithms based on decomposition,” IEEE Transactions on Evolutionary Computation, vol. 21, no. 3, pp. 440–462, 2017.
 [16] Y. Sun, G. G. Yen, and Z. Yi, “Global view-based selection mechanism for many-objective evolutionary algorithms,” in 2017 IEEE Congress on Evolutionary Computation. IEEE, 2017, pp. 427–434.
 [17] ——, “Reference line-based estimation of distribution algorithm for many-objective optimization,” Knowledge-Based Systems, vol. 132, pp. 129–143, 2017.
 [18] K. Deb, A. Sinha, and S. Kukkonen, “Multi-objective test problems, linkages, and evolutionary methodologies,” in Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation. ACM, 2006, pp. 1141–1148.
 [19] M. Pelikan, K. Sastry, and D. E. Goldberg, “Multiobjective estimation of distribution algorithms,” in Scalable Optimization via Probabilistic Modeling. Springer, 2006, pp. 223–248.
 [20] Q. Zhang, A. Zhou, and Y. Jin, “RM-MEDA: A regularity model-based multiobjective estimation of distribution algorithm,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 1, pp. 41–63, 2008.
 [21] H. Karshenas, R. Santana, C. Bielza, and P. Larranaga, “Multiobjective estimation of distribution algorithm based on joint modeling of objectives and variables,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 519–542, 2014.
 [22] N. Khan, D. E. Goldberg, and M. Pelikan, “Multi-objective Bayesian optimization algorithm,” Urbana, vol. 51, p. 61801, 2002.
 [23] J. Schwarz and J. Ocenasek, “Multiobjective Bayesian optimization algorithm for combinatorial problems: Theory and practice,” Neural Network World, vol. 11, no. 5, pp. 423–442, 2001.
 [24] E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.
 [25] M. Laumanns and J. Ocenasek, “Bayesian optimization algorithms for multi-objective optimization,” in Parallel Problem Solving from Nature—PPSN VII. Springer, 2002, pp. 298–307.
 [26] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, “Combining convergence and diversity in evolutionary multiobjective optimization,” Evolutionary Computation, vol. 10, no. 3, pp. 263–282, 2002.
 [27] X. Zhong and W. Li, “A decision-tree-based multi-objective estimation of distribution algorithm,” in Computational Intelligence and Security, 2007 International Conference on. IEEE, 2007, pp. 114–11.
 [28] P. A. Bosman and D. Thierens, “Multi-objective optimization with diversity preserving mixture-based iterated density estimation evolutionary algorithms,” International Journal of Approximate Reasoning, vol. 31, no. 3, pp. 259–289, 2002.
 [29] M. Costa and E. Minisci, “MOPED: A multi-objective Parzen-based estimation of distribution algorithm for continuous problems,” in Evolutionary Multi-Criterion Optimization. Springer, 2003, pp. 282–294.
 [30] M. Pelikan, K. Sastry, and D. E. Goldberg, “Multiobjective hBOA, clustering, and scalability,” in Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation. ACM, 2005, pp. 663–670.
 [31] K. Sastry, D. E. Goldberg, and M. Pelikan, “Limits of scalability of multiobjective estimation of distribution algorithms,” in Evolutionary Computation, 2005. The 2005 IEEE Congress on, vol. 3. IEEE, 2005, pp. 2217–2224.
 [32] A. Zhou, Q. Zhang, and Y. Jin, “Approximating the set of Pareto-optimal solutions in both the decision and objective spaces by an estimation of distribution algorithm,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 1167–1189, 2009.
 [33] V. Cherkassky and F. M. Mulier, Learning from Data: Concepts, Theory, and Methods. John Wiley & Sons, 2007.
 [34] T. Hastie, R. Tibshirani, J. Friedman, and J. Franklin, “The elements of statistical learning: Data mining, inference and prediction,” The Mathematical Intelligencer, vol. 27, no. 2, pp. 83–85, 2005.
 [35] Z. He and G. G. Yen, “Visualization and performance metric in many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 3, pp. 386–402, 2016.
 [36] J. Cheng, G. G. Yen, and G. Zhang, “A many-objective evolutionary algorithm with enhanced mating and environmental selections,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 4, pp. 592–605, 2015.
 [37] A. Zhou, Q. Zhang, Y. Jin, E. Tsang, and T. Okabe, “A model-based evolutionary algorithm for bi-objective optimization,” in Evolutionary Computation, 2005. The 2005 IEEE Congress on, vol. 3. IEEE, 2005, pp. 2568–2575.
 [38] A. Zhou, Y. Jin, Q. Zhang, B. Sendhoff, and E. Tsang, “Combining model-based and genetics-based offspring generation for multi-objective optimization using a convergence criterion,” in Evolutionary Computation, 2006. CEC 2006. IEEE Congress on. IEEE, 2006, pp. 892–899.
 [39] T. Gal and T. Hanne, “Consequences of dropping nonessential objectives for the application of MCDM methods,” European Journal of Operational Research, vol. 119, no. 2, pp. 373–378, 1999.
 [40] D. Brockhoff and E. Zitzler, “Automated aggregation and omission of objectives for tackling many-objective problems,” in New Developments in Multiple Objective and Goal Programming. Springer, 2010, pp. 81–102.
 [41] ——, “Objective reduction in evolutionary multiobjective optimization: Theory and applications,” Evolutionary Computation, vol. 17, no. 2, pp. 135–166, 2009.
 [42] A. López Jaimes, C. A. Coello Coello, and D. Chakraborty, “Objective reduction using a feature selection technique,” in Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation. ACM, 2008, pp. 673–680.
 [43] D. K. Saxena and K. Deb, “Non-linear dimensionality reduction procedures for certain large-dimensional multi-objective optimization problems: Employing correntropy and a novel maximum variance unfolding,” in International Conference on Evolutionary Multi-Criterion Optimization. Springer, 2007, pp. 772–787.
 [44] D. Brockhoff, D. K. Saxena, K. Deb, and E. Zitzler, “On handling a large number of objectives a posteriori and during optimization,” in Multiobjective Problem Solving from Nature. Springer, 2008, pp. 377–403.
 [45] D. K. Saxena, J. A. Duro, A. Tiwari, K. Deb, and Q. Zhang, “Objective reduction in many-objective optimization: Linear and nonlinear algorithms,” IEEE Transactions on Evolutionary Computation, vol. 17, no. 1, pp. 77–99, 2013.
 [46] X. Guo, X. Wang, M. Wang, and Y. Wang, “A new objective reduction algorithm for many-objective problems: Employing mutual information and clustering algorithm,” in 2012 Eighth International Conference on Computational Intelligence and Security. IEEE, 2012, pp. 11–16.
 [47] H. Wang and X. Yao, “Objective reduction based on nonlinear correlation information entropy,” Soft Computing, vol. 20, no. 6, pp. 2393–2407, 2016.
 [48] H. K. Singh, A. Isaacs, and T. Ray, “A Pareto corner search evolutionary algorithm and dimensionality reduction in many-objective optimization problems,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 4, pp. 539–556, 2011.
 [49] N. Kambhatla and T. K. Leen, “Dimension reduction by local principal component analysis,” Neural Computation, vol. 9, no. 7, pp. 1493–1516, 1997.
 [50] H. Wang, Q. Zhang, L. Jiao, and X. Yao, “Regularity model for noisy multiobjective optimization,” IEEE Transactions on Cybernetics, vol. 46, no. 9, pp. 1997–2009, 2016.
 [51] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, Scalable Test Problems for Evolutionary Multiobjective Optimization. Springer, 2005.
 [52] P. Fleming, “Computer aided control systems using a multiobjective optimization approach,” in Proc. IEEE Control’85 Conference, 1985, pp. 174–179.
 [53] Y. Jin, T. Okabe, and B. Sendhoff, “Adapting weighted aggregation for multiobjective evolution strategies,” in International Conference on Evolutionary Multi-Criterion Optimization. Springer, 2001, pp. 96–110.
 [54] I. Das and J. E. Dennis, “Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems,” SIAM Journal on Optimization, vol. 8, no. 3, pp. 631–657, 1998.
 [55] Y. Yuan, H. Xu, B. Wang, B. Zhang, and X. Yao, “Balancing convergence and diversity in decomposition-based many-objective optimizers,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 2, pp. 180–198, 2015.
 [56] R. Cheng, Y. Jin, M. Olhofer, and B. Sendhoff, “A reference vector guided evolutionary algorithm for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 773–791, 2016.
 [57] K. Deb, K. Miettinen, and D. Sharma, “A hybrid integrated multi-objective optimization procedure for estimating nadir point,” in Evolutionary Multi-Criterion Optimization. Springer, 2009, pp. 569–583.
 [58] K. Deb, S. Chaudhuri, and K. Miettinen, “Towards estimating nadir objective vector using evolutionary approaches,” in Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation. ACM, 2006, pp. 643–650.
 [59] H. Wang, S. He, and X. Yao, “Nadir point estimation for many-objective optimization problems based on emphasized critical regions,” Soft Computing, pp. 1–13, 2015.
 [60] Y. Yuan, H. Xu, and B. Wang, “Evolutionary many-objective optimization using ensemble fitness ranking,” in Proceedings of the 2014 Conference on Genetic and Evolutionary Computation. ACM, 2014, pp. 669–676.
 [61] Q. Zhang, A. Zhou, Y. Jin et al., “Modelling the regularity in estimation of distribution algorithm for continuous multi-objective evolutionary optimization with variable linkages,” IEEE Transactions on Evolutionary Computation, vol. 5, pp. 1–40, 2006.
 [62] E. Zitzler, K. Deb, and L. Thiele, “Comparison of multiobjective evolutionary algorithms: Empirical results,” Evolutionary Computation, vol. 8, no. 2, pp. 173–195, 2000.
 [63] H. Ishibuchi, Y. Setoguchi, H. Masuda, and Y. Nojima, “Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes,” IEEE Transactions on Evolutionary Computation, 2016.
 [64] P. A. Bosman and D. Thierens, “The balance between proximity and diversity in multiobjective evolutionary algorithms,” IEEE Transactions on Evolutionary Computation, vol. 7, no. 2, pp. 174–188, 2003.
 [65] Y. Yuan, H. Xu, B. Wang, and X. Yao, “A new dominance relation-based evolutionary algorithm for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16–37, 2016.
 [66] L. While, L. Bradstreet, and L. Barone, “A fast way of calculating exact hypervolumes,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 1, pp. 86–95, 2012.
 [67] K. Deb and R. B. Agrawal, “Simulated binary crossover for continuous search space,” Complex Systems, vol. 9, no. 2, pp. 115–148, 1995.
 [68] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms. John Wiley & Sons, 2001, vol. 16.
 [69] S. Robert, J. Torrie, and D. Dickey, “Principles and procedures of statistics: A biometrical approach,” 1997.
Appendix A
In this section, experiments are designed to verify the benefit of the dimension reduction in the proposed algorithm by counting the number of generations needed before the same performance appears for the first time. Specifically, we first record the generation number at which the baseline results^5 are reached for the first time. Then the proposed algorithm is re-run without the dimension reduction until the resulting performance satisfies the criterion formulated by Equation (3) or the generation number exceeds the predefined maximum.

^5 We first employ PCSEA to obtain the training data by setting its population size to be for and for , and then perform MaOEDA-IR to get the final solutions. The maximal generation numbers for PCSEA and MaOEDA-IR are set to be . The HV of the final solutions is taken as the baseline result.
(3) 
where denotes the corresponding baseline results.
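Although the exact form of Equation (3) is not recoverable here, the stopping rule it describes, iterating until the hypervolume (HV) of the current population reaches the baseline value or a generation cap is hit, can be sketched as follows. The 2-D hypervolume routine, the tolerance `tol`, and all function names are illustrative assumptions, not the paper's implementation.

```python
def hypervolume_2d(points, ref):
    """Exact hypervolume of a 2-D minimization front w.r.t. reference point ref."""
    pts = sorted(points)                      # ascending in the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                      # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def run_until_baseline(step, pop, hv_baseline, ref, tol=1e-3, max_gen=1000):
    """Advance the optimizer via `step` until the population HV is within
    `tol` of the baseline HV, mirroring the stopping criterion described
    around Equation (3); returns the generation count and final population."""
    for gen in range(1, max_gen + 1):
        pop = step(pop)
        if hypervolume_2d(pop, ref) >= (1.0 - tol) * hv_baseline:
            return gen, pop
    return max_gen, pop

# Usage with a toy "optimizer" that jumps straight to the target front.
target = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
ref = (1.1, 1.1)
hv_star = hypervolume_2d(target, ref)
gen, final = run_until_baseline(lambda p: target, [(1.0, 1.0)], hv_star, ref)
print(gen)  # 1: the toy step reaches the baseline front immediately
```

The generation count returned by `run_until_baseline` corresponds to the quantity compared in Fig. 8 for the variants with and without dimension reduction.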
Fig. 8 illustrates the comparisons over the , , and objective DTLZ1–DTLZ4 test problems, denoted by A–L; the red and blue lines denote the results generated by the proposed MaOEDA-IR with and without dimension reduction, respectively. The results in Fig. 8 clearly show that the generation numbers required by MaOEDA-IR without dimension reduction are about twice those of MaOEDA-IR with dimension reduction, except on the objective DTLZ2 test problem (denoted by “D”). In addition, MaOEDA-IR without dimension reduction cannot obtain results satisfying Equation (3) on the objective DTLZ3 and DTLZ4 test problems before the maximum predefined generation number is reached (denoted by “H” and “K”, respectively). In summary, the experiments in Fig. 8 validate our motivation for employing dimension reduction in the proposed MaOEDA-IR.