Collective Modified Value at Risk in Life Insurance

Insurance is a tool through which individuals can transfer risks to others: insurers collect funds from individuals to meet financial needs arising from damage. Analysis of the risk in life insurance claims is therefore essential for the insurance company's actuary. In an insurance system, the risk is the event in which an insured party puts forward a claim, and the claim is compensation for a loss. The sum of the individual claims in one-period insurance is called the aggregate claim, and the aggregate claim is a collective risk [1].


Introduction
Insurance is a tool for handling risk by transferring it from one party to another, in this case the insurance company [1]. Economically, insurance means collecting funds that can be used to cover or compensate a person who experiences a loss [2]. Insurance is the business of taking over risk from the customer, so that the customer feels comfortable joining the insurance program. Insurance must handle risk reliably for the business to be profitable, which in turn keeps the customer comfortable with the offered program [3]. In insurance, the risk in question is the occurrence of an insured claim. Risk is generally measured by the variance and standard deviation [1]. It should be highlighted that the variance and standard deviation measure the average size of the risk and do not accommodate all risk events, so an alternative measure is needed [4,5].
The frequency distribution of claims describes the number of claims filed by policyholders with the insurer. The frequency of claims is modeled with a discrete probability distribution; here we choose the Poisson distribution. A discrete probability distribution describes a random variable whose sample space elements can be enumerated; in this study, the sample space elements are the nonnegative integers. The distribution of such a random variable is also called a counting distribution. In insurance, one such counting random variable is the number of claims filed by policyholders in a given period [6].
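As a small sketch of this counting distribution, the Poisson probability mass function can be evaluated directly; the mean of 3 claims per period below is a hypothetical value chosen only for illustration:

```python
import math

def poisson_pmf(k, lam):
    """P(N = k) for a Poisson-distributed claim count N with mean lam claims per period."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Hypothetical mean of 3 claims per period, for illustration.
lam = 3.0
probs = {k: round(poisson_pmf(k, lam), 4) for k in range(6)}
print(probs)

# The support is the nonnegative integers, so the pmf sums to 1.
print(round(sum(poisson_pmf(k, lam) for k in range(60)), 6))  # 1.0
```

The second print confirms numerically that the probabilities over the nonnegative integers sum to one, which is what makes this a valid counting distribution.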
Insurance is closely related to risk: with an insurance product there is a transfer of risk from the policyholder to the insurer, and thus a potential for losses (claims) whose distribution is called the loss distribution (claim risk distribution).
A study by [7] estimated a risk model for the life insurance claims of cancer patients using a Bayesian method. The risk-measurement models in the research discussed above generally used the variance as the risk measure, but the variance cannot handle every risk event. It is therefore necessary to develop an alternative risk-measurement model.
February 2021, Volume 2, Issue 1, 24-32 E-ISSN: 2720-9326 P-ISSN: 2716-0459

Problem Statement
As mentioned above, the variance and standard deviation cannot handle all risk events, and this is why this research develops risk measures based on Collective Value-at-Risk (CVaR) and Collective Modified Value-at-Risk (CMVaR).

Research Objective
The objectives of the study are as follows:

1. To estimate model parameters for individual and collective risk in life insurance, where the number of claims is Poisson distributed.

2. To develop a Collective Modified Value-at-Risk model for life insurance risk, for the number of claims and the distribution of claim amounts.

3. To compare and analyze the calculations of Collective Risk, Collective Value-at-Risk, and Collective Modified Value-at-Risk on simulated data for the number and value of life insurance claims.

Theoretical Framework
The theoretical framework described above concerns an actuarial study of the loss distribution, that is, the claim risk in insurance. An insurance claim (loss) must be paid in the future; the total amount depends on the terms and conditions of the agreement and on the event when it happens. When the loss is random, however, the loss definition must be limited to a single payment (even though the payment may be smaller). The loss analysis that follows uses the aggregate design.
Based on the data of claim risk events, the insurance loss is separated into two components: the number of claims (claim frequency) and the claim amount (claim severity). This separation makes it easier to estimate the distribution of each component. The number of claims is a discrete event, so its random variable is discrete and follows a discrete distribution such as the Poisson, Binomial, Negative Binomial, or Geometric distribution. The claim amount (severity), by contrast, is continuous, so its random variable is continuous and follows a continuous distribution such as the Gamma, Exponential, Lognormal, or Pareto distribution.
Next, a distribution model must be determined for each of the two components, the number of claims and the claim amount. Once the distribution models for the number and amount of claims are known, they become the building blocks of the aggregate risk (collective risk) model in insurance. The objective of modelling aggregate insurance risk is to obtain the probability distribution of the total benefits that the insurer must be able to pay to fulfil its commitment to the insured.
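The aggregate design described above can be sketched as a compound Poisson simulation: draw a Poisson number of claims, then sum that many lognormal claim amounts. All parameter values below are hypothetical, chosen only for illustration:

```python
import math
import random

random.seed(42)

# Hypothetical parameters for illustration only.
lam = 5.0              # mean number of claims per period (Poisson frequency)
mu, sigma = 8.0, 0.6   # lognormal severity parameters

def simulate_aggregate_loss():
    """One period: draw N ~ Poisson(lam), then S = X_1 + ... + X_N with lognormal X_i."""
    # Poisson draw via Knuth's method: multiply uniforms until below e^-lam.
    L, n, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            break
        n += 1
    return sum(random.lognormvariate(mu, sigma) for _ in range(n))

samples = [simulate_aggregate_loss() for _ in range(20_000)]
mean_S = sum(samples) / len(samples)

# Theoretical mean of the compound Poisson sum: E[S] = lam * E[X],
# with E[X] = exp(mu + sigma^2 / 2) for a lognormal severity.
theory = lam * math.exp(mu + sigma ** 2 / 2)
print(f"simulated mean {mean_S:,.0f} vs theoretical {theory:,.0f}")
```

The simulated mean of the aggregate loss should sit close to the theoretical compound Poisson mean, illustrating how the frequency and severity components combine into one collective loss distribution.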
Aggregate claim risk (collective risk) in insurance is generally measured by the variance or standard deviation of the loss distribution. Note, however, that risk measures based on the variance or standard deviation cannot always accommodate all claim risk events, because the variance and standard deviation are average risk measures. For this reason, a risk-measurement model that can accommodate all claim risk events needs to be developed. This research develops insurance claim risk measurement using the Collective Value-at-Risk (CVaR) and Collective Modified Value-at-Risk (CMVaR) models. The CVaR model is developed with a standard normal distribution approach, while the CMVaR model is developed to accommodate claim risk distributions that are non-normal. Both general and specific models are developed.
A general model is a mathematical formulation that can be used with any relevant combination of distributions for the number and amount of claims. A specific model, on the other hand, fixes a particular combination of distributions for the number and amount of claims, such as Poisson-Lognormal.
The risk measures produced by this development, CVaR and CMVaR, can be used as alternatives for measuring insurance claim risk, so that claim risk can be measured in a manner better suited to the characteristics of the observed insurance claim data.

Stages of Analysis
The stages of data analysis are the methods for processing the collected information so that the data characteristics are easily understood and useful for solving the research problem. Data analysis also refers to the activity of turning research data into information from which conclusions can be drawn. The stages of data analysis for measuring life insurance claims in this research are illustrated in Fig 3 and explained in the following summary.
(a) Data collection: The data needed in this research are life insurance claims within one time period. The claim data are secondary data from Bank Negara Malaysia and include the number of claims and the amounts of the insurance claims paid to the insured.

(b) Grouping data: The insurance claim data are grouped into two components: the number of claims (claim frequency) and the claim amounts (claim severity). Each component is first analysed with descriptive statistics. The number-of-claims component is a discretely distributed random variable, while the claim-amount component is a continuously distributed random variable. A distribution model is then set for each component through the following steps: assuming a distribution model, estimating the model parameters, and testing the goodness of fit of the distribution.

(c) Model distribution assumptions: The distribution assumptions for the number and amount of claims are made by matching candidate distributions to the frequency histograms of the number of claims and of the claim amounts. This matching is done with the EasyFit 5.5 software. Based on each frequency histogram, a probability model assumption is set. The parameters of the assumed distribution models are then estimated.
(d) Model distribution estimation: Based on each assumed distribution model, the model parameters are estimated for each claim component using the Maximum Likelihood Estimator (MLE), again with the EasyFit 5.5 software. The estimated distribution models must then pass a goodness-of-fit test, as discussed in the following section.
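For the Poisson-Lognormal combination, the maximum likelihood estimators have closed forms, so the MLE step can be sketched without specialised software. The claim counts and amounts below are hypothetical illustrative data, not the Bank Negara Malaysia data:

```python
import math

def poisson_mle(counts):
    """MLE of the Poisson mean: the sample mean of the claim counts."""
    return sum(counts) / len(counts)

def lognormal_mle(severities):
    """MLE of (mu, sigma): sample mean and (biased) sample sd of the log-severities."""
    logs = [math.log(x) for x in severities]
    mu = sum(logs) / len(logs)
    var = sum((v - mu) ** 2 for v in logs) / len(logs)
    return mu, math.sqrt(var)

# Hypothetical illustrative data (not the actual claim data set).
claim_counts = [3, 5, 4, 2, 6, 4, 3, 5]
claim_amounts = [1200.0, 950.0, 2100.0, 1750.0, 800.0, 1400.0]

lam_hat = poisson_mle(claim_counts)
mu_hat, sigma_hat = lognormal_mle(claim_amounts)
print(lam_hat)  # 4.0
print(round(mu_hat, 3), round(sigma_hat, 3))
```

Both estimators follow directly from setting the derivative of the log-likelihood to zero, which is why the sample mean of the counts and the moments of the log-amounts appear.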
Next, a conclusion is drawn from the results of the risk calculations using all three risk-measurement models. The last stage of the claim data analysis is drawing conclusions in light of the objectives of the research that has been conducted.

Collective Value-at-Risk
Generally, for an aggregate loss S and confidence level alpha, the Collective Value-at-Risk is defined as the alpha-quantile of the aggregate loss distribution, CVaR_alpha(S) = inf{ s : P(S <= s) >= alpha }. Under the standard normal approximation this becomes CVaR_alpha(S) = E[S] + z_alpha * sqrt(Var(S)), where z_alpha is the alpha-quantile of the standard normal distribution. For a compound Poisson S with claim-number parameter lambda and severity moments m_k = E[X^k], we have E[S] = lambda * m1 and Var(S) = lambda * m2.
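A minimal sketch of the normal-approximation CVaR for the Poisson-Lognormal combination, assuming E[S] = lambda*m1 and Var(S) = lambda*m2 for the compound Poisson sum; the parameter values are hypothetical and Python's statistics.NormalDist supplies the normal quantile:

```python
import math
from statistics import NormalDist

def lognormal_moment(k, mu, sigma):
    """k-th raw moment of a lognormal severity: E[X^k] = exp(k*mu + k^2*sigma^2/2)."""
    return math.exp(k * mu + k * k * sigma * sigma / 2)

def collective_var(alpha, lam, mu, sigma):
    """CVaR under the normal approximation:
    E[S] = lam*m1 and Var(S) = lam*m2 for a compound Poisson sum S."""
    m1 = lognormal_moment(1, mu, sigma)
    m2 = lognormal_moment(2, mu, sigma)
    z = NormalDist().inv_cdf(alpha)  # standard normal alpha-quantile
    return lam * m1 + z * math.sqrt(lam * m2)

# Hypothetical parameters, for illustration only.
cvar_95 = collective_var(0.95, lam=5.0, mu=8.0, sigma=0.6)
print(f"{cvar_95:,.0f}")
```

As expected of a quantile-based risk measure, raising the confidence level raises the CVaR, and at any level above 50% the CVaR exceeds the expected aggregate loss.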

Conclusions
In this study, the formulation of the Collective Modified Value-at-Risk model was studied, especially for the case where the number of claims has a Poisson distribution and the claim amounts have a lognormal distribution. In the Modified Value-at-Risk, the Cornish-Fisher expansion provides an adjustment factor to the estimated percentile of a non-normal distribution. The resulting Collective Modified Value-at-Risk (CMVaR) model turns out to be a function of the Poisson distribution parameter and the moments m1, m2, and m3 of the lognormal distribution.

Suggestion and Recommendation
A simulation study we conducted [8] shows that the values obtained can be considered by actuaries when analysing collective risk in life insurance. In the future, we propose to study real data, to be obtained from Bank Negara Malaysia, and to carry out a comparison between the existing model and our proposed model.