Algorithms 1 and 2 were thoroughly analyzed. Firstly, the theoretical complexity of both was calculated using the most common method, big O notation [48]. The method allows one to describe the asymptotic behavior of a function or a set of functions. The conducted calculations have shown that both algorithms have the same theoretical complexity, which means that the algorithms cannot be compared directly on this basis. A decision was therefore made to conduct several real-life tests, which gave a reliable answer. After implementation of the algorithms in the C programming language, their performance was tested on four independent hardware platforms. The goal was to compare the computational speed of algorithms dedicated to multithreaded processors, so tests were performed using 1, 2, 3, 5, 7 and 10 threads. The platform details are shown in Table 5. The results of the performed tests are shown in Fig. 12 and the following figures; each value was calculated as an average of 10 tests. As Platform D was much slower than the others, a secondary axis was used on its charts. Its results are not as impressive as in the case of modern hardware, but the algorithm still achieves a speed-up of up to a factor of 2.
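As a rough illustration (not the authors' C benchmark), averaging a timing measurement over 10 runs, as in the reported tests, can be set up as follows; `pf_step` is a stand-in workload, not the actual filter implementation:

```python
import time
import random

def pf_step(n_particles):
    """Stand-in workload: one predict-and-weight pass over n_particles."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    weights = [1.0 / (1.0 + p * p) for p in particles]
    s = sum(weights)
    return [w / s for w in weights]

def average_runtime(n_particles, n_trials=10):
    """Mean wall-clock time over n_trials runs of the workload."""
    total = 0.0
    for _ in range(n_trials):
        t0 = time.perf_counter()
        pf_step(n_particles)
        total += time.perf_counter() - t0
    return total / n_trials

avg = average_runtime(10_000)
```

Using `time.perf_counter` rather than `time.time` avoids clock-resolution artifacts for short runs; averaging then smooths out scheduler noise, which matters most when more threads are requested than the processor offers.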
This makes the charts much easier to read. As one can see, the computational time of the PF and of the MultiPDF 1T PF on the same platform is similar. With an increasing number of threads, one can observe a reduction in computational time. When working with more than 7 threads, the computational time is not reduced further, and is even extended in a few cases. This situation is directly related to the processors, especially to their numbers of cores and threads: Platforms B, C and D offer only 4 threads or cores. If one tries to use more threads than are physically or logically available, the system wastes a lot of time switching active tasks. In that sense, a calculation which was done within 7 days can now be done in just 1 day.

Based on the simulation results, one can see that division of the PF model into several parallel filters can improve the quality of estimation, but not in all cases. For Ob1, increasing N_f causes a deterioration of the estimation quality. For the multivariable plant Ob2, division into a few parallel parts improves the estimation quality. Ob3 is a specific plant, with a simple linear transition function and a nonlinear measurement function, tested in five versions. For versions v1-v3, the best estimation results for different numbers of filters are shown in the corresponding figures. The most interesting results were obtained for versions v4 and v5, because an increase or decrease in this parameter causes a deterioration of the estimation quality.
As one can see in Table 6, the MultiPDF PF offers the possibility of the compensation of wrong values: almost always, the result from one of the parallel filters is farther from the true value than the result of the MultiPDF PF. In some cases this is hard to see on the graph, due to the very small distance between these results. These results also show that for plants with trigonometric functions, dividing the PF algorithm into a higher number of parallel filters works better. Plants Ob4 and Ob5 show benefits from the proposed PF modification; this is probably caused by the strong nonlinear character of these plants and their multidimensionality. To make the results similar to each other for different N_f parameters, one must make the calculations for a much larger number of particles. Using more than one sub-filter, there is always a chance for error compensation by the sub-filters, resulting in a higher quality index. In one of the figures, however, the opposite can be observed: as the number of particles increases, the quality of estimation worsens. This may be caused by the distance between the subsequent noised measurements. Due to the Kalman filtration, which was used instead of the particle draw, this method does not provide good results here. Therefore, in some cases, speeding up the calculations using MPF may result in a significant deterioration of the estimation quality.

Data Availability: Data with saved simulation results are available.
Conflicts of interest: The authors declare that they have no conflict of interest.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 licence, unless indicated otherwise in a credit line to the material.

8 Conclusions

The main condition for improving estimation quality is the presence of strong nonlinearity. For special object types, even a large number of parts makes the quality index smaller. In general, a trade-off must be found between the speed of computation and the quality of the estimation, because all real-world applied systems are specific and different. In most cases, an improvement in the estimation can be observed for smaller sums of particles, while for large sums there is no difference or it is negligible. It is possible to further improve the quality of the estimation, as can be seen on the right-hand side of the graphs, but the additional computational effort has a relatively small impact on the result. Moreover, too strong an influence of the linear part may deteriorate the quality of the estimation, which can be seen in the case of object Ob44, among others. In the vein of this method, it is also possible to further parallelize the PF algorithm (in every sub-filter), because the bottlenecks are normalization and resampling; normally, they must be carried out using information about all particles.

Appendix

Additional research on the effect of object properties was carried out. For the Ob4 object, which enables one to show the improvement of estimation quality by the MultiPDF PF, a number of modifications named Ob4X have been proposed. The operation of the algorithm was checked for, among others, separated state variables and various types of nonlinearities in the transition or measurement functions, or in both of them. The type of the transition function (linear or nonlinear) has no influence on the obtained results, and neither does the separation of the state variables. The test results with the exact specification of the objects have been included in the additional file 1.

The aim of Ikoma [38] was to track a maneuvering target. The authors used a state-space representation to model this situation.
The dynamics of the target were represented by a system model in continuous time, although a discretized system model was actually used in practice. The position of the target was measured by radar, and the process was described by a nonlinear observation model in polar coordinates. Consequently, a nonlinear non-Gaussian state-space model was used. Gustafsson [39] developed a framework for positioning, navigation, and tracking problems using particle filters.
It consisted of a class of motion models and a general nonlinear measurement equation in position. A general algorithm was presented based on marginalization, enabling a KF to estimate all position derivatives. Based on simulations, the authors argued how the particle filter could be used for positioning based on cellular phone measurements, for integrated navigation in aircraft, and for target tracking in aircraft and cars.
Tracking an object can be defined as estimating the motion model parameters. The main approaches for tracking objects in video sequences involve transform-domain methods [40], [41] and spatial-domain modelling techniques [42], which include standard prediction techniques such as Kalman and particle filtering [43], [44], and [45]. The inherent drawback of spatial segmentation is that it works only in the case of a homogeneous background; segmentation-based methods fail in general outdoor environments. To overcome these problems, Dohortey et al. used a multiresolution wavelet transform to decompose an image into various sub-bands. Motion vectors were calculated from the coarsest level and refined using the other sub-bands of the wavelet transform. Wang et al. parameterized the object motion by four parameters: position, size, grayscale distribution, and the presence of texture in the objects. Hence, standard tracking algorithms like Kalman filtering do not give good results due to the number of approximations involved. An important contribution to this area was made by Isard and Blake with the introduction of the condensation algorithm [46]. Later contributions include those of Doucet et al. A translation-based motion model was used for rectangular or elliptical objects. In this method, the color priors for each object were computed using weighted kernel density estimation.
The weights were obtained based on the distance of the pixels from the object centroid. The mean-shift vector was computed recursively by maximizing the likelihood ratio between the object color prior and the model generated from the hypothesized object position.
The parametric B-spline curves were tracked over image sequences [44]. The B-spline curves were parameterized by their control points.
For stability in tracking, the degree of freedom associated with the control points was reduced by applying an affine transform. The concept of the condensation algorithm is factored sampling of the posterior density function. It is known that the posterior can be represented as a product of the prior and the likelihood. Generally, the likelihood function is multimodal, and thus the posterior cannot be calculated directly from the likelihood and the prior. The condensation algorithm is particularly useful when the conditional observation density, i.e. the likelihood, can be evaluated pointwise but cannot be sampled, while the prior can be sampled but not easily evaluated.
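The factored-sampling idea can be sketched in a few lines: draw samples from the prior (which we can sample but not evaluate), weight each by the pointwise-evaluable likelihood, and resample. The bimodal likelihood below is an illustrative assumption, not one from the cited works:

```python
import random
import math

def likelihood(x):
    """Pointwise-evaluable, multimodal observation density (illustrative)."""
    return math.exp(-0.5 * (x - 1.0) ** 2) + math.exp(-0.5 * (x + 1.0) ** 2)

def factored_sampling(n):
    # Draw from the prior (a broad Gaussian we can sample but need not evaluate).
    samples = [random.gauss(0.0, 2.0) for _ in range(n)]
    # Weight each sample by the likelihood, evaluated pointwise.
    weights = [likelihood(s) for s in samples]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample proportionally to the weights: the returned set
    # approximates samples from the posterior.
    return random.choices(samples, weights=weights, k=n)

posterior = factored_sampling(5000)
```

The resampled set concentrates around both likelihood modes, which is exactly the situation where a unimodal Kalman-style estimate would fail.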
The models are assumed to follow a first-order Markov process. The algorithm for tracking is given by Isard and Blake [44]. Zhou et al. used an adaptive appearance model based on that of Jepson et al.; the appearance model was based on intensity rather than on the phase of the intensity. From the posterior density obtained, the EM algorithm was used to update the appearance model adaptively. The number of particles selected was also adaptive. Prediction error is a measure of prediction quality: if the prediction error is high, the noise is spread widely, forcing the model to cover large jumps in motion state. Thus, more particles are selected when the noise variance is larger, and vice versa.
The standard multidimensional normal density was used as the likelihood function. Occlusion detection was implemented by keeping track of the pixels obtained from the appearance model. The model update was stopped if occlusion was detected. This method was used to track a disappearing car, an arbitrarily moving tank, and a face moving under occlusion. The main idea behind this approach was that tracking could be defined as pixel translations. Thus, the tracking algorithm did not depend on the rigid nature of the object being tracked, which is a necessary condition in the approaches described in the previous sections. The authors used level-set functions as defined by Caselles et al. Yilmaz et al. generated a semi-parametric model using an independent opinion polling strategy. From the related equations, the posterior probability of membership was computed. Further results along these lines are due to Yilmaz et al. and Yu et al. The main limitations of the spectral-domain analysis were the lack of proper resolution and the blocking effects of the discrete Fourier transform. The spatial-domain techniques overcame all the above limitations with a smaller number of frames required in the sequence. A few concepts in estimating error bounds are presented by Havinga and Smit [27].
This research work is built around these ideas. The classical solution to the state-space estimation problem is given by the Kalman filter [1], [4] in which the state model and the measurement or the observation model are assumed to be linear, and the noise is assumed to be Gaussian.
If the model is nonlinear, then it is approximated by a linear model so that the Kalman filter can be applied [22]. The extensions of the Kalman filter include the extended Kalman filter [3], [23] and the unscented Kalman filter [23], [24], in which the nonlinear term is approximated up to first- and second-order terms, respectively.
In the Kalman filtering approach, the underlying density functions are assumed to be Gaussian; hence, estimation of the mean and variance characterizes the complete density function. Real models for an application are generally nonlinear and non-Gaussian. In order to handle these cases, the complete density function is represented by its samples. Let a system be defined by parameters represented by st and let the observations be represented by zt. The aim of prediction is to recursively estimate the state parameters st from the observations, as given in equation 3. The posterior density function over the state parameters, given by p(st | zt), gives the measure of the state parameters from the observations. This posterior density function is predicted using the Bayesian framework. It is assumed that the system follows a first-order Markov process. The denominator can be ignored since it is a normalizing constant. The posterior function is also called the optimal sampling function [14], [22] by the research community. Particle filtering is used widely in equalization [12], [11], multiuser detection [52], channel coding [5], etc.
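The recursion described above, with sampling from the transition prior and weighting by the likelihood, can be sketched as a minimal bootstrap particle filter. The scalar model below (state and observation equations, noise levels) is an illustrative assumption, not the thesis's model:

```python
import random
import math

def bootstrap_pf(observations, n=500, q=1.0, r=1.0):
    """Minimal bootstrap particle filter for an illustrative scalar model:
       s_t = 0.5 * s_{t-1} + v_t,  z_t = s_t**2 / 20 + n_t,
       with v_t and n_t zero-mean Gaussian (std q and r)."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for z in observations:
        # Predict: propagate each particle through the state model.
        particles = [0.5 * s + random.gauss(0.0, q) for s in particles]
        # Update: weight each particle by the Gaussian likelihood.
        weights = [math.exp(-0.5 * ((z - s * s / 20.0) / r) ** 2)
                   for s in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Estimate: posterior mean under the normalized weights.
        estimates.append(sum(w * s for w, s in zip(weights, particles)))
        # Resample to obtain an unweighted particle set for the next step.
        particles = random.choices(particles, weights=weights, k=n)
    return estimates

est = bootstrap_pf([0.1, 0.3, 0.2])
```

Because the transition prior is used as the importance density, the weight update reduces to the likelihood alone, which is the defining simplification of the bootstrap filter.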
Particle filtering has also been applied to a Doppler-based target-tracking [53] problem. In such applications, a radar sensor sends a signal to the target and estimates the position and velocity of the target via observation of the time delay and Doppler shift of the reflected signal.
A method to detect the target is presented in this chapter. The observation model is assumed to be nonlinear. Suppose that the target is moving with a velocity v, as shown in Figure 4. The v_rad(t) component is the nonlinear term in equations 4. Particle filtering provides a solution without requiring any such approximations. The noise is assumed to be Gaussian.
The observation model, which is nonlinear, consists of two observations, i.e., the time delay and the Doppler shift. The joint conditional density p(zt | x_t^m) is Gaussian because n(t) is Gaussian. In order to compute the weights, the prior and likelihood have to be computed as explained in the previous sections. To begin, a density function with an arbitrary mean and variance is selected; the pdf is sampled to obtain a set of particles, which are used in the prior. Then, the likelihood is computed using equation 4.
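A Gaussian likelihood for a range and radial-velocity observation might be computed per particle as below. The state layout, sensor-at-origin geometry, and noise levels are assumptions for illustration, not the chapter's exact equations:

```python
import math

def range_doppler_likelihood(particle, z_range, z_vrad,
                             sigma_r=5.0, sigma_v=0.5):
    """Gaussian likelihood of a (range, radial velocity) observation for one
       particle; particle = (x, y, vx, vy), sensor assumed at the origin."""
    x, y, vx, vy = particle
    rng = math.hypot(x, y)
    # Radial velocity: projection of the velocity onto the line of sight,
    # the nonlinear term of the observation model.
    vrad = (x * vx + y * vy) / rng if rng > 0 else 0.0
    # Product of two independent Gaussian factors (unnormalized is fine,
    # since particle weights are normalized afterwards).
    return (math.exp(-0.5 * ((z_range - rng) / sigma_r) ** 2) *
            math.exp(-0.5 * ((z_vrad - vrad) / sigma_v) ** 2))
```

A particle whose predicted range and radial velocity match the measurement gets weight near 1; mismatched particles are suppressed exponentially.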
This is repeated recursively at each time step. The basic problem in the standard particle filtering technique is the degeneracy of particles [11], [22]. On applying particle filtering over a sufficient number of time-steps, most particles are assigned negligible weights, and only a few particles with proper weights survive. This effect is known as degeneracy. As a result, the posterior distribution becomes highly skewed, and hence, subsequent sampling from this density would only deteriorate the performance of the filter.
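Degeneracy is commonly quantified by the effective sample size, N_eff = 1 / sum(w_i^2) for normalized weights, which the next paragraphs use as the trigger for resampling. A minimal sketch:

```python
def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) over normalized weights: it equals the number
       of particles for uniform weights and approaches 1 under degeneracy."""
    total = sum(weights)
    norm = [w / total for w in weights]
    return 1.0 / sum(w * w for w in norm)

uniform = [0.25, 0.25, 0.25, 0.25]      # healthy: N_eff = 4
degenerate = [0.97, 0.01, 0.01, 0.01]   # degenerate: N_eff close to 1
```

Comparing N_eff against a threshold (often a fixed fraction of the particle count) gives a cheap test for when the weight distribution has become too skewed to be useful.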
A simple technique to overcome degeneracy is to replace the weights of all the sampled particles by a constant, changing the posterior to a uniform density. As a result, the particles with negligible weights are boosted, and the large weights are reduced.
This technique retains the particles but changes the posterior density, which is not desirable. Other suggested solutions include choosing a proper importance density function and appropriate resampling techniques [22]. The following resampling technique has been used in the experiments in this study to improve the performance of the particle-filtering technique by varying the particle density, time-steps and particle location.
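For reference, one widely used resampling scheme (systematic resampling, an example rather than the specific technique of this study) replicates particles in proportion to their weights using a single uniform draw:

```python
import random

def systematic_resample(particles, weights):
    """Systematic resampling: one uniform draw, then n evenly spaced
       pointers swept through the cumulative weight distribution."""
    n = len(particles)
    total = sum(weights)
    cumulative = []
    acc = 0.0
    for w in weights:
        acc += w / total
        cumulative.append(acc)
    u0 = random.uniform(0.0, 1.0 / n)   # single random offset
    out, j = [], 0
    for i in range(n):
        u = u0 + i / n
        while cumulative[j] < u:        # advance to the matching particle
            j += 1
        out.append(particles[j])
    return out
```

Compared with drawing n independent multinomial samples, the single stratified sweep has lower variance and costs O(n).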
A small N_eff indicates severe degeneracy. To overcome the degeneracy without changing the posterior density, kernel-based smoothing [14] techniques are applied. The idea behind this technique is that the highly skewed discrete posterior density is first smoothed using a kernel and transformed into a continuous density function.
The continuous density function is sampled to obtain the discrete posterior density function. The Epanechnikov kernel is the optimal kernel in terms of mean square error between the original distribution and the corresponding kernel-based smoothed distribution [14].
On the other hand, the Gaussian kernel is computationally simple and gives sufficiently good results. For this reason, the Gaussian kernel has been used in this resampling technique. For each particle, a weighted Gaussian kernel with unit variance and mean equal to the particle location is generated. The weight of the Gaussian density is equal to the weight associated with the selected particle.
The weighted mixture Gaussian density functions are added to obtain a smoothed and continuous density function, which is an approximation of the discrete posterior density function. This continuous density function is next discretized and fed into the particle filter. The main problem that arises due to the resampling step is that the diversity of particles is reduced.
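Sampling from the Gaussian-mixture approximation described above is equivalent to resampling the particles by weight and then jittering each copy with the kernel. A one-dimensional sketch (bandwidth value is an illustrative assumption):

```python
import random

def kernel_smoothed_resample(particles, weights, bandwidth=1.0):
    """Draw from the Gaussian-mixture approximation of the posterior:
       pick a mixture component with probability equal to its weight,
       then sample the Gaussian kernel centred at that particle.
       Jittering duplicated particles restores diversity."""
    n = len(particles)
    centres = random.choices(particles, weights=weights, k=n)
    return [c + random.gauss(0.0, bandwidth) for c in centres]

new_particles = kernel_smoothed_resample([0.0, 0.0, 1.0],
                                         [0.4, 0.4, 0.2],
                                         bandwidth=0.3)
```

Unlike the constant-weight trick criticized earlier, this preserves the shape of the posterior (up to the smoothing) while still boosting low-weight regions.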
This can be overcome by using Markov chain Monte Carlo techniques [57]. An object is defined by the features it exhibits. To detect the motion of an object, the movement of the features must be tracked. After identifying the motion of the features, a prediction model must be formed such that the position of the features can be predicted in the next frame.
This thesis considers edges for tracking since edges are the most prominent features of an object. The edge in the block is shown in Figure 5. The moving edge or the motion edge can be completely characterized by these parameters. It is assumed that the object, and hence the edge, moves with the foreground velocity. The foreground and background velocities help in modelling the occlusion and disocclusion effects due to the movement of the object.
In order to predict and track the parameters, a motion model must be defined, followed by a prediction algorithm both of which are defined in the subsequent sections.
In this dissertation, a square neighborhood of seventeen pixels in each dimension is used throughout the model. Figure 5 shows the block: low-intensity crossed lines at the center of the block represent the reference axes, and the thick line represents the edge. This is justified by the fact that if the new pixel location x' was chosen correctly, then the intensity values must be equal, i.e., the intensity at the old location in the current frame equals the intensity at x' in the next frame.
This difference can be modelled as a normal distribution, as given in equation 5. It is to be noted that n defines a unit vector normal to the motion edge. The parameter w in equation 5 gives the width of the occluded or disoccluded region due to the movement of the edge. The temporal motion of the edge describes the motion of the edge with respect to time.
It is assumed that motion parameters follow a first-order Markov process. The edge is assumed to move with the foreground velocity. Hence, the distance of the edge from the center of the block for the next time step is obtained by moving the edge with the normal component of the foreground velocity.
Gaussian noise is added to account for modelling errors implicit in this model. This problem can be identified as a state-space estimation problem, which includes a set of state evolution equations, as given in equation 5.
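A sketch of such a first-order Markov state evolution for the edge parameters follows. The state layout `(d, theta, v_normal)` and the noise levels are assumptions for illustration, not the thesis's exact parameterization:

```python
import random

def propagate_edge_state(state, dt=1.0, sigma=0.5):
    """One step of an illustrative edge-motion model: the edge distance d
       from the block center moves with the normal component of the
       foreground velocity; Gaussian noise absorbs modelling error."""
    d, theta, v_normal = state
    d_next = d + v_normal * dt + random.gauss(0.0, sigma)
    theta_next = theta + random.gauss(0.0, 0.05)   # orientation drifts slowly
    v_next = v_normal + random.gauss(0.0, 0.1 * sigma)
    return (d_next, theta_next, v_next)

next_state = propagate_edge_state((0.0, 0.0, 1.0))
```

Applying this to every particle implements the prediction step of the filter for this tracking problem.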
Since this problem involves nonlinear models, the particle-filtering technique described in Chapter 3 can be applied directly. The observation equation is given by equation 5. The main equation in particle filtering is given by 3. The parameter T can be considered a smoothing constant that smoothes the posterior function. A better search in the parameter space can be achieved by smoothing the posterior density function. At the first time instant, to form the initialization prior, the mean values of the state parameters are required.
These mean values are computed using low-level detectors, which are described in the next section. They are obtained directly from the individual frames of the video sequence. The following sections describe the methods used to calculate the state parameters.
In these computations, a square neighborhood of length seventeen pixels is selected. It is assumed that the spatial edge captured within a neighborhood is approximately a straight line. The motion edge can be observed from the frame difference of two consecutive frames.
The difference image formed by subtracting two blocks gives the edge formed due to motion, i.e., the motion edge created by the moving edge. The first step in line-fitting is the selection of the coordinates to fit the line. Since line-fitting is implemented for the motion edge, the coordinates that form the motion edge in the selected neighborhood must be collected.
This is implemented by setting a threshold on the frame difference and selecting only those coordinates whose difference in frame intensities is greater than the threshold. A simple two-parameter line-fitting algorithm is used to fit a straight line to the selected pixels. The line-fitting algorithm [59] is described as follows: let xi and yi be the x and y coordinates of the moving pixels.
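A two-parameter least-squares fit y = a + b*x over the selected moving pixels can be written directly from the normal equations:

```python
def fit_line(xs, ys):
    """Two-parameter least-squares fit y = a + b*x to the moving-pixel
       coordinates, using the standard normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a = (sy - b * sx) / n                           # intercept
    return a, b

# Pixels lying exactly on y = 2x + 1 recover a = 1, b = 2.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Note that this slope-intercept form degenerates for near-vertical edges (the denominator vanishes when all x coordinates coincide); a normal-angle parameterization avoids that case.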
The velocity of the edge is determined by using spatio-temporal filters. Four spatio-temporal filters based on gradient operations [60], [61] are applied to the motion plane to detect its orientation. From the cube hence formed, the motion plane is extracted. The motion edge is perpendicular to the orientation of the spatial edge. For example, for a vertical edge, the motion plane is the horizontal plane.
The ratio of E0 and E90 gives the ratio of the gradient along the displacement axis to that along the time axis. This gradient represents the velocity of the edge. The direction of the velocity is obtained by comparing the magnitudes of E45 and E135. If E45 is greater than E135, then the object is moving away from the left-bottom edge of the frame toward the right-bottom edge of the frame. If E135 is greater than E45, then the object is moving towards the left-bottom edge of the frame from the right-bottom edge of the frame. By selecting a neighborhood that contains the object of interest, the foreground velocity uf is calculated, and by choosing a neighborhood away from the object, the background velocity is computed.
Thus, the model parameters are computed from the images that are the observations. The next section describes the application of the particle-filtering algorithm to the tracking problem. The initialization prior is formed by using equation 5. The likelihood of particles is calculated using equation 5. Then, the particle-filtering equation is applied.
The posterior is marginalized to obtain the mean values of the parameters of interest. From the two state parameters, the edge is constructed by forming a straight line in the updated neighborhood. The theoretical classical PF. The CP method is also known to be simple particles and the likelihood box. When there is no consistency and, most importantly, to be independent of nonlinearities. There are different methods for The system dynamics and the sensor equations have the performing the contraction step [26].
The resampling step is following general form: used to introduce variety. The lower K is the maximum number of time steps and nx is the dimen- bar is the minimum value of a quantity and the upper bar is sion of [xk ].
In this paper, the interval vector [xk ] consists of the object kinematic interval operator [. For that pur- number of parameters to be estimated. The system kinematic pose, elementary arithmetic operations, e. The sensors are considered to collect point measurements In addition, a lot of research has been performed with the according to equation 4. Nevertheless, to express the mea- so called inclusion functions [26].
The interval measurements box [x] is a box [f ] [x] containing f [x]. Here nz denotes the the interval enclosing the real image set and, then to decrease dimension of the measurement vector.
Several methods for building factor. Model of the Extended Targets elimination, the Gauss-Seidel algorithm, linear programming, etc. Each of these methods may be more suitable to some In a two-dimensional case, where the state vector consists types of CSP.
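The elementary interval operations and the natural inclusion function described above can be sketched as follows (a minimal illustration, not a full interval library):

```python
class Interval:
    """Closed interval [lo, hi] with the elementary arithmetic of
       interval analysis."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Subtraction flips the other interval's endpoints.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product bounds are among the four endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

def natural_inclusion(box):
    """Natural inclusion function for f(x) = x*x + x: evaluating f with
       interval arithmetic yields a box guaranteed to contain f([x])."""
    return box * box + box

img = natural_inclusion(Interval(-1.0, 2.0))
```

Here the guarantee is one-sided: the enclosure [-3, 6] contains the true image [-0.25, 6] of f on [-1, 2] but is wider, which is exactly why contraction steps are needed afterwards.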
The target motion is modelled by the nearly constant velocity model [27], [28].

C. Observation Model

A scenario with a network of multiple low-cost sensors positioned along the trajectory of movement is considered. The collected measurements consist of range and bearing. The number of measurements M obtained at each time step from an active sensor consists of N measurements originating from the target and C clutter measurements. The likelihood that the j-th measurement is related to the p-th box particle is given by a relation in which the pdf p([Vk] | [xk]) represents the probability of the measurement source interval conditioned on the sensor state, and p([zjk] | [Vk]) gives the probability of a measurement [zjk] belonging to the measurement source interval [Vk] relevant for the box particle p. The measurement equation takes into account the visibility area of the sensor for each of the box particles; a visualisation of the indicator functions is plotted with a black line in the example figure. The filtering proceeds in steps: Step A is the state prediction of the x and y interval position coordinates and the interval radius of the target; Step B contracts the box particles with respect to the sensor and the measurements and is devoted to introducing variety in the box particles by resampling. Further, we iterate the likelihood calculation and the resampling step several times in order to perform the contraction. The visibility area for one of the particles is plotted with a red line in the figure.
Only the portion of the measurements that falls within the interval [dk] is considered in the further steps. An inclusion function is applied as part of the prediction step. The contraction is realised using an intersection. As discussed in [23], the ratio of the area covered by the intervals after and before the contraction gives a term for updating the weights. An example of this transformation can be seen in the corresponding figure.
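The intersection-based contraction and the after/before area ratio for the weight update can be sketched as below; boxes are represented as lists of (lo, hi) pairs, which is an implementation assumption, not the paper's notation:

```python
def intersect(box_a, box_b):
    """Per-dimension intersection of two boxes; each box is a list of
       (lo, hi) pairs. Returns None when the boxes are disjoint."""
    out = []
    for (alo, ahi), (blo, bhi) in zip(box_a, box_b):
        lo, hi = max(alo, blo), min(ahi, bhi)
        if lo > hi:
            return None
        out.append((lo, hi))
    return out

def area(box):
    """Product of the side lengths of a box."""
    prod = 1.0
    for lo, hi in box:
        prod *= (hi - lo)
    return prod

def contract_and_reweight(particle_box, measurement_box, weight):
    """Contract a box particle against a measurement box, then update its
       weight by the after/before area ratio (cf. [23])."""
    contracted = intersect(particle_box, measurement_box)
    if contracted is None:
        return None, 0.0
    return contracted, weight * area(contracted) / area(particle_box)
```

A particle box that barely overlaps the measurement box shrinks a lot and has its weight reduced accordingly, while a box already consistent with the measurement keeps most of its weight.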
The estimate of the target state is taken as the middle of the estimated box. Resampling of box particles is not as straightforward as in the classical PF algorithms: as an example, one of the box particles is selected with equal probability and divided into several equi-sized subintervals. The visual results in the corresponding figures show the area covered by all of the particles for estimation with 10 or more box particles.