
Showing papers in "Signal Processing: Image Communication" in 1994


Journal ArticleDOI
TL;DR: An extension of the 3-D Recursive Search Block-Matching algorithm is presented that provides sub-pixel accuracy of the estimated motion vectors, which significantly broadens the applicability of the algorithm in the area of interlaced-to-sequential scan conversion and coding.
Abstract: Recently the 3-D Recursive Search Block-Matching algorithm was introduced as a high-quality, low-cost, true-motion estimation method suitable for critical field rate conversion applications. In this article an extension of the algorithm is presented that provides sub-pixel accuracy of the estimated motion vectors. This significantly broadens the applicability of the algorithm in the area of interlaced-to-sequential scan conversion and coding. The extension adds hardly any computational complexity, so the attractiveness of the algorithm for a VLSI implementation remains high. Moreover, a simplified version of the algorithm, the Y-prediction block-matcher, is suggested that offers sub-pixel accuracy, a large range of motion vectors, and an extremely low complexity requiring only four candidate vectors per block. An evaluation of this estimator is included in the paper.
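The sub-pixel idea can be made concrete with a short sketch. The following is not the authors' 3-D Recursive Search algorithm; it only shows, under assumed block size and a SAD criterion, how an integer-pixel candidate vector can be refined to half-pixel accuracy by evaluating a bilinearly interpolated block:

```python
# Hedged sketch: half-pixel refinement of one candidate motion vector.
import numpy as np

def sad(block, ref):
    """Sum of absolute differences between two equal-sized blocks."""
    return np.abs(block.astype(float) - ref.astype(float)).sum()

def sample_block(frame, y, x, size):
    """Extract a size x size block at fractional (y, x) by bilinear
    interpolation; positions are assumed to lie inside the frame."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    fy, fx = y - y0, x - x0
    a = frame[y0:y0 + size,         x0:x0 + size].astype(float)
    b = frame[y0:y0 + size,         x0 + 1:x0 + size + 1].astype(float)
    c = frame[y0 + 1:y0 + size + 1, x0:x0 + size].astype(float)
    d = frame[y0 + 1:y0 + size + 1, x0 + 1:x0 + size + 1].astype(float)
    return ((1 - fy) * (1 - fx) * a + (1 - fy) * fx * b
            + fy * (1 - fx) * c + fy * fx * d)

def refine_subpel(cur, ref, y, x, size, vy, vx, step=0.5):
    """Refine an integer candidate (vy, vx) to half-pixel accuracy by
    testing its eight fractional neighbours."""
    block = cur[y:y + size, x:x + size]
    best, best_cost = (float(vy), float(vx)), np.inf
    for dy in (-step, 0.0, step):
        for dx in (-step, 0.0, step):
            cost = sad(block, sample_block(ref, y + vy + dy,
                                           x + vx + dx, size))
            if cost < best_cost:
                best, best_cost = (vy + dy, vx + dx), cost
    return best, best_cost
```

In the recursive-search setting only a handful of such candidates are evaluated per block, which is what keeps the complexity low.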

140 citations


Journal ArticleDOI
TL;DR: Multiresolution block matching methods for both monocular and stereoscopic image sequence coding are evaluated and shown to drastically reduce the amount of processing needed for block correspondence without seriously affecting the quality of the reconstructed images.
Abstract: Multiresolution block matching methods for both monocular and stereoscopic image sequence coding are evaluated. These methods are seen to drastically reduce the amount of processing needed for block correspondence without seriously affecting the quality of the reconstructed images. The evaluation criteria are the prediction error and the speed of the algorithm for motion, disparity, and fused motion and disparity estimation, in comparison with the full search (exhaustive) method. A new method is also proposed, based on multiresolution techniques, for efficient coding of the disparity or displacement vector field.
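A hedged sketch of the general coarse-to-fine principle (block size, pyramid depth and search radius are assumptions, not the paper's settings): a vector found on a downsampled level seeds a small refinement search on the next finer level, so the search window per level stays tiny.

```python
# Coarse-to-fine (pyramid) block matching sketch.
import numpy as np

def downsample(img):
    return img[::2, ::2]

def full_search(cur, ref, y, x, size, cy, cx, radius):
    """Exhaustive search in a (2*radius+1)^2 window around (cy, cx)."""
    block = cur[y:y + size, x:x + size].astype(float)
    best, best_cost = (cy, cx), np.inf
    for vy in range(cy - radius, cy + radius + 1):
        for vx in range(cx - radius, cx + radius + 1):
            ry, rx = y + vy, x + vx
            if (0 <= ry <= ref.shape[0] - size
                    and 0 <= rx <= ref.shape[1] - size):
                cost = np.abs(block - ref[ry:ry + size, rx:rx + size]).sum()
                if cost < best_cost:
                    best, best_cost = (vy, vx), cost
    return best

def pyramid_match(cur, ref, y, x, size=16, levels=3, radius=2):
    """Estimate motion coarse-to-fine, doubling the vector per level."""
    vy = vx = 0
    for lvl in range(levels - 1, -1, -1):
        c, r = cur, ref
        for _ in range(lvl):
            c, r = downsample(c), downsample(r)
        s = 2 ** lvl
        vy, vx = full_search(c, r, y // s, x // s, size // s, vy, vx, radius)
        if lvl:                       # propagate vector to the finer level
            vy, vx = 2 * vy, 2 * vx
    return vy, vx
```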

84 citations


Journal ArticleDOI
TL;DR: Object-based analysis-synthesis coding (OBASC) using the source model of ‘moving rigid 3D objects’ is investigated for the encoding of moving images at very low data rates; it gives image quality almost identical to that obtained with the ‘moving flexible 2D objects’ model at the same data rate.
Abstract: The topic of investigation was object-based analysis-synthesis coding (OBASC) using the source model of ‘moving rigid 3D objects’ for the encoding of moving images at very low data rates. According to the coding concept, each moving object of an image is described and encoded by three parameter sets defining its motion, shape and surface color. The parameter sets of each object are obtained by image analysis. They are coded using an object-dependent parameter coding. Using the coded parameter sets, an image can be synthesized by model-based image synthesis. In comparison to block-based hybrid coding, OBASC requires the additional transmission of shape parameters. The transmission of shape information avoids the mosquito and block artifacts of a block-based coder. Furthermore, important areas such as facial areas can be reconstructed with a significant image quality improvement. OBASC based on the source models of ‘moving flexible 2D objects’ and of ‘moving rigid 3D objects’ gives almost identical image quality for the same data rate. Therefore the use of more advanced source models like flexible 3D objects or 3D face models is expected to further improve image quality.

69 citations


Journal ArticleDOI
TL;DR: Advantages and disadvantages of motion-adaptive standards conversion, as compared to fixed vertical-temporal filtering and motion compensation techniques, are discussed.
Abstract: The principle of motion-adaptive standards conversion is explained, and the requirements for motion detectors, used in such applications, are stated. A detailed description of the functional blocks of the motion detector is then given. Advantages and disadvantages of motion-adaptive standards conversion, as compared to fixed vertical-temporal filtering and motion compensation techniques, are discussed.

46 citations


Journal ArticleDOI
TL;DR: An improved block matching algorithm based on a split and merge procedure is used to estimate the motion and correctly propagate the motion vectors from blocks with reliable motion to blocks with uncertain motion.
Abstract: This paper presents a new system for the conversion of interlaced formats to progressive ones. Like other proposals, it is motion compensation-based, but two substantial improvements are added. First, assuming translational motion, the problem in the vertical direction is studied as a generalized interpolation problem. As a result, we derive two sets of linear filters which take into account the aliasing existing inside the fields. The first set of filters allows one to efficiently perform the interpolation which is required for sub-pixel motion estimation. The second set of filters improves the estimation of the lines needed to obtain the progressive format, namely the deinterlacing process itself. Second, an improved block matching algorithm based on a split and merge procedure is used to estimate the motion and correctly propagate the motion vectors from blocks with reliable motion to blocks with uncertain motion. In order to tackle the problem of covered/uncovered objects, the whole process of estimation and motion-compensated interpolation is applied forward and backward. Simulation results and objective measurements are provided for artificially moving interlaced sequences obtained from a fixed picture and for a progressive sequence first converted to interlaced. The global system has also been tested on other sequences and visually assessed.
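A toy version of the basic deinterlacing step, heavily simplified from what the paper describes: a single global motion vector replaces the per-block vector field, and plain vertical averaging stands in for the generalized interpolation filters.

```python
# Minimal motion-compensated deinterlacing sketch (assumed simplifications:
# one global integer motion vector, vertical averaging as fallback).
import numpy as np

def deinterlace(field, prev_frame, vy=0, vx=0, top=True):
    """Rebuild a progressive frame from one field: known lines are copied,
    missing lines come from the shifted previous progressive frame when the
    shifted row is valid, otherwise from averaging adjacent known lines."""
    H2, W = field.shape
    H = 2 * H2
    out = np.empty((H, W))
    out[(0 if top else 1)::2] = field              # copy transmitted lines
    for y in range(1 if top else 0, H, 2):         # synthesize missing lines
        sy = y + vy
        mc = np.roll(prev_frame[sy], vx) if 0 <= sy < H else None
        up = out[y - 1] if y - 1 >= 0 else out[y + 1]
        dn = out[y + 1] if y + 1 < H else out[y - 1]
        out[y] = mc if mc is not None else 0.5 * (up + dn)
    return out
```

Note that np.roll wraps pixels around the line edges; a real implementation would pad instead.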

44 citations


Journal ArticleDOI
TL;DR: Fractal techniques have been used by a number of authors to code monochrome images, and it is shown that one can considerably weaken the requirement on contractivity, leading to a coding scheme which preserves high frequencies much better than that proposed by Jacquin.
Abstract: Fractal techniques have been used by a number of authors (Barnsley and Sloan, 1988; Jacquin, 1989) to code monochrome images. In this paper we investigate a version of the technique proposed by Jacquin. A number of extensions are made to his coding scheme, and in particular it is shown that one can considerably weaken the requirement on contractivity, leading to a coding scheme which preserves high frequencies much better than that proposed by Jacquin.
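To make the range-domain mapping concrete, here is a minimal Jacquin-style sketch (an illustration of the general scheme, not the authors' extended coder): each 8x8 range block is approximated by a scaled, offset, downsampled 16x16 domain block, and the clip on the scale factor s plays the role of the contractivity requirement that the paper shows can be weakened.

```python
# Jacquin-style fractal block coding sketch: least-squares grey-level fit
# of each range block against a pool of downsampled domain blocks.
import numpy as np

def shrink(dom):
    """Average 2x2 pixels so a 16x16 domain block matches an 8x8 range."""
    return dom.reshape(8, 2, 8, 2).mean(axis=(1, 3))

def encode_range_block(rng, domains, s_max=1.0):
    """Find (domain index, scale s, offset o) minimizing ||rng-(s*dom+o)||.
    Clipping |s| to s_max enforces (or, raised above 1, weakens) the usual
    contractivity constraint on the grey-level transform."""
    best = None
    r = rng.astype(float).ravel()
    for k, dom in enumerate(domains):
        d = dom.astype(float).ravel()
        dc = d - d.mean()
        denom = (dc ** 2).sum()
        s = 0.0 if denom == 0 else ((r - r.mean()) * dc).sum() / denom
        s = float(np.clip(s, -s_max, s_max))
        o = r.mean() - s * d.mean()
        err = ((r - (s * d + o)) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, k, s, o)
    return best[1:]          # (domain index, scale, offset)
```

The `domains` list would typically hold `shrink()`-ed versions of all 16x16 blocks of the image; allowing |s| > 1 is, loosely, the relaxation the paper exploits to preserve high frequencies.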

36 citations


Journal ArticleDOI
TL;DR: An adaptive scheme employing pyramid structure is proposed for multiresolution encoding of still pictures by designing a low-entropy pyramid decomposition by means of different reduction/expansion filters and giving encoding priority to important features through a content-driven decision rule.
Abstract: An adaptive scheme employing a pyramid structure is proposed for multiresolution encoding of still pictures. Efficiency is increased by designing a low-entropy pyramid decomposition by means of different reduction/expansion filters, and also by giving encoding priority to important features through a content-driven decision rule. Quantization error feedback performed along the pyramid levels ensures lossless reconstruction capability and improves the robustness of the algorithm.
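The error-feedback idea is easy to verify on a two-level toy pyramid (the reduction/expansion filters here, 2x2 mean and pixel replication, are assumed stand-ins for the paper's filters): the detail level is formed against the expanded, already quantized coarse level, so the decoder, which only ever sees quantized data, reconstructs exactly.

```python
# Quantization error feedback in a two-level pyramid: lossless by design.
import numpy as np

def reduce_(img):                  # 2x2 mean as a stand-in reduction filter
    return img.reshape(img.shape[0]//2, 2, img.shape[1]//2, 2).mean(axis=(1, 3))

def expand(img):                   # pixel replication as a stand-in expansion
    return img.repeat(2, axis=0).repeat(2, axis=1)

def quantize(x, step=8):
    return step * np.round(x / step)

def encode(img):
    coarse_q = quantize(reduce_(img))     # coarsely quantized base layer
    detail = img - expand(coarse_q)       # feedback: difference is taken
    return coarse_q, detail               # against the *quantized* base

def decode(coarse_q, detail):
    return expand(coarse_q) + detail      # exact if detail coded losslessly

img = np.random.randint(0, 256, (8, 8)).astype(float)
cq, d = encode(img)
assert np.allclose(decode(cq, d), img)    # lossless reconstruction
```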

35 citations


Journal ArticleDOI
TL;DR: This paper addresses the problem of motion compensation in a multiresolution environment, considering both QMF-SBC and wavelet transform approaches, and different motion compensation schemes are derived and their efficiency is considered with regard to scalability and to the lengths of subband analysis and synthesis filters.
Abstract: Multiresolution techniques have become more and more appealing in current image coding. The multiresolution representation of an image provides many important features, such as spectral shaping of coding noise according to human eye perception, good image energy compaction and coder tuning with respect to the characteristics of each band, and it allows for the multilevel layered transmission that is one of the main targets pursued by broadcasters. Despite these appealing capabilities, multiresolution techniques have failed to give the expected results. One of the reasons for this failure is the difficulty of exploiting the temporal redundancy present in image sequences. This paper addresses the problem of motion compensation in a multiresolution environment, considering both QMF-SBC and wavelet transform approaches. Different motion compensation schemes are derived and their efficiency is considered with regard to scalability and to the lengths of the subband analysis and synthesis filters. Simulation results are used to support the relevant conclusions where needed.

24 citations


Journal ArticleDOI
TL;DR: The intra-block filtering techniques are reviewed to highlight the limitations implied by small block dimensions, and hybrid techniques, using variable-length FIR filters after the discard of low-order DCT coefficients, are introduced to increase the computational efficiency.
Abstract: The extensive use of discrete cosine transform (DCT) techniques in image coding suggests the investigation of filtering and downsampling methods acting directly in the DCT domain. As DCT image transforms usually operate on blocks, it is useful for DCT filtering techniques to preserve the block dimension. In this context the present paper first reviews the intra-block filtering techniques to highlight the limitations implied by small block dimensions. To overcome the artefacts introduced by this method and to satisfy the filtering design constraints, which are usually defined in the Fourier domain, inter-block techniques are developed starting from the implementation of FIR filtering. Inter-block schemes do not exhibit any such limitation, but their computational cost has to be taken into account. In addition, hybrid techniques, using variable-length FIR filters after the discard of low-order DCT coefficients, are introduced to increase the computational efficiency; in this case, the introduced aliasing has to be kept at tolerable values. The amount of tolerable aliasing strictly depends on the subsequent operations applied to the filtered and downsampled image. The numerical examples reported could form a basis for error estimation and for evaluating the trade-off between performance and computational complexity.
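The most basic DCT-domain operation of this kind, 2:1 downsampling of a block by coefficient truncation, can be sketched in a few lines (this is the generic textbook operation, not the paper's specific filter designs; scipy is used for brevity):

```python
# DCT-domain 2:1 downsampling of one 8x8 block by coefficient truncation.
import numpy as np
from scipy.fft import dctn, idctn

def dct_downsample(block8):
    """Keep the 4x4 low-frequency corner of the 8x8 orthonormal DCT and
    inverse-transform at the smaller size, giving a 4x4 spatial block.
    The factor 0.5 = (1/sqrt(2))^2 preserves amplitude across the
    size change for the orthonormal transform."""
    coeff = dctn(block8.astype(float), norm='ortho')
    low = coeff[:4, :4] * 0.5
    return idctn(low, norm='ortho')
```

Performing the truncation per block like this is exactly what introduces the block-boundary aliasing that the inter-block and hybrid techniques above are designed to control.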

22 citations


Journal ArticleDOI
K.W. Chun, Jong Beom Ra
TL;DR: This paper describes a new block matching algorithm based on successive refinement of motion vector candidates that diminishes the local minimum problem in the existing fast search algorithms and also reduces the computation time drastically compared with the full search algorithm.
Abstract: This paper describes a new block matching algorithm based on successive refinement of motion vector candidates. The proposed algorithm starts with a full search in which an approximated matching criterion is used to obtain a set of motion vector candidates. In each successive searching process, the matching criterion is refined and the search is performed only on the candidate set obtained at the preceding layer, to refine the candidates. By repeating this process, at the last layer a single motion vector is selected from a few candidates using the conventional mean absolute difference criterion without approximation. Since a full search is performed at the first layer, even though a coarse matching criterion is used, the proposed algorithm diminishes the local minimum problem of existing fast search algorithms and also reduces the computation time drastically compared with the full search algorithm.
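A two-layer sketch of the idea (pixel subsampling is assumed here as the "approximated criterion"; the paper may define its approximation differently): a cheap criterion screens all vectors, and only the survivors are re-evaluated with the exact mean absolute difference.

```python
# Successive-refinement block matching sketch: cheap full search, then
# exact MAD on the few surviving candidates.
import numpy as np

def mad(a, b, step=1):
    """Mean absolute difference, optionally on a subsampled pixel grid."""
    return np.abs(a[::step, ::step].astype(float)
                  - b[::step, ::step].astype(float)).mean()

def refine_search(cur, ref, y, x, size=16, radius=7, keep=8):
    block = cur[y:y + size, x:x + size]
    # Layer 1: full search with a coarse (4:1 subsampled) criterion.
    cands = []
    for vy in range(-radius, radius + 1):
        for vx in range(-radius, radius + 1):
            ry, rx = y + vy, x + vx
            if (0 <= ry <= ref.shape[0] - size
                    and 0 <= rx <= ref.shape[1] - size):
                cands.append((mad(block, ref[ry:ry+size, rx:rx+size], 4),
                              vy, vx))
    cands = sorted(cands)[:keep]       # keep only the best few candidates
    # Final layer: exact criterion on the surviving candidates only.
    best = min(cands, key=lambda c: mad(
        block, ref[y+c[1]:y+c[1]+size, x+c[2]:x+c[2]+size], 1))
    return best[1], best[2]
```

Because every vector is visited at least once by the cheap criterion, the search cannot be trapped in a local minimum the way gradient-style fast searches can.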

22 citations


Journal ArticleDOI
TL;DR: An algorithm which determines the page orientation prior to skew detection is presented, which is shown to achieve a high skew detection accuracy for mixed mode document formats which contain typewritten text, cursive script, line-art and photographic pictures.
Abstract: In the field of document image analysis, accurate detection and removal of intrinsic skew is of paramount importance as a first step in the processing of document images. Here we present an efficient scheme for detecting the degree of misalignment in a document page. The proposed algorithm operates directly on the raw digitised image and is shown to achieve a high skew detection accuracy for mixed mode document formats which contain typewritten text, cursive script, line-art and photographic pictures. We also discuss efficiency considerations for a practical real-time hardware implementation of the algorithm. Furthermore, in a practical document image processing environment, it is necessary to process documents that are landscape or portrait oriented. In this context we present an algorithm which determines the page orientation prior to skew detection.
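For context, a common projection-profile baseline for skew estimation can be written compactly (this is a well-known generic technique, shown only to fix ideas; the paper's own algorithm operates directly on the raw digitised image and is not reproduced here):

```python
# Projection-profile skew estimation baseline (generic, not the paper's).
import numpy as np
from scipy.ndimage import rotate

def estimate_skew(binary_img, max_angle=5.0, step=0.25):
    """Return the rotation angle (degrees) maximizing the variance of the
    horizontal projection profile; aligned text lines sharpen the peaks."""
    best_angle, best_score = 0.0, -1.0
    for angle in np.arange(-max_angle, max_angle + step, step):
        rotated = rotate(binary_img.astype(float), angle,
                         reshape=False, order=0)
        profile = rotated.sum(axis=1)      # row sums of the rotated page
        score = profile.var()
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle
```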

Journal ArticleDOI
TL;DR: A new approach to very low bit-rate interpersonal visual communication based on a suitable scene model, i.e. a flexible structure adapted to the specific characteristics of the speaker's face, which is very promising for applications both in videophone coding and in picture animation.
Abstract: This paper describes a new approach to very low bit-rate interpersonal visual communication based on a suitable scene model, i.e. a flexible structure adapted to the specific characteristics of the speaker's face. The face model is dynamically adapted to time-varying facial expressions by means of a few parameters, estimated from the analysis of the real image sequence, which are used to apply knowledge-based deformation rules to a simplified muscle structure. Facial muscles are distributed in correspondence with the primary facial features and can be activated through the direct stimulation of each individual fiber or, indirectly, by interaction with adjacent stimulated fibers. The analysis algorithms performed at the transmitter to estimate the model parameters are based on feature-oriented operators aimed at segmenting the real incoming frames and at extracting the primary facial descriptors. The analysis/synthesis algorithms have been developed on a Silicon Graphics workstation and have been tested on various ‘head-and-shoulder’ sequences: the results obtained are very promising for applications both in videophone coding and in picture animation, where the facial expressions of a synthetic actor are reproduced according to the parameters extracted from a real speaking face.

Journal ArticleDOI
TL;DR: This paper compares the different versions of the block truncation coding image compression algorithm from the viewpoint of reconstructed image quality and computational complexity.
Abstract: Many different versions of the block truncation coding image compression algorithm exist. In this paper we compare the different versions from the viewpoint of reconstructed image quality and computational complexity.
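For reference, the classic two-level, moment-preserving BTC that is the common ancestor of the compared variants can be written in a few lines (the 4x4 block size and this exact variant are assumptions of the sketch):

```python
# Classic block truncation coding: keep mean, standard deviation and a
# bit plane per block; reconstruction preserves the first two moments.
import numpy as np

def btc_encode(block):
    mean, std = block.mean(), block.std()
    bitplane = block >= mean
    return mean, std, bitplane

def btc_decode(mean, std, bitplane):
    q = bitplane.sum()                 # number of pixels above the mean
    m = bitplane.size
    if q in (0, m):                    # flat block: single level
        return np.full(bitplane.shape, mean)
    a = mean - std * np.sqrt(q / (m - q))      # low reconstruction level
    b = mean + std * np.sqrt((m - q) / q)      # high reconstruction level
    return np.where(bitplane, b, a)
```

The variants compared in the paper differ mainly in how the two levels and the threshold are chosen, trading reconstruction quality against arithmetic cost.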

Journal ArticleDOI
TL;DR: It is shown that the support of this time-varying frequency response matches that of the video signal STS, and thus filtering along the accelerated motion trajectory potentially yields the smallest amount of aliasing in standards conversion of video with accelerated motion.
Abstract: We address the standards conversion of digital video containing accelerated motion. The spectral characterization of video containing accelerated motion is performed by means of short time spectral analysis, where the support of the short time spectrum (STS) of video with accelerated motion is derived. Linear time-varying filters that operate along an accelerated motion trajectory are then analyzed, and the time-varying frequency response of these filters is computed. Finally, we show that the support of this time-varying frequency response matches that of the video signal STS, and thus filtering along the accelerated motion trajectory potentially yields the smallest amount of aliasing in standards conversion of video with accelerated motion. Experimental results comparing the performance of motion-compensated filtering along the accelerated motion trajectory with filtering along the constant velocity motion trajectory that is tangent to the accelerated trajectory at the processed frame are provided.
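For context, the classical constant-velocity result that this accelerated-motion analysis generalizes can be stated compactly; the derivation below is the standard one and is not specific to this paper:

```latex
% Standard result: for s(x, t) = s_0(x - v t), the spectrum is confined to
% a plane through the origin, which motion-compensated filters exploit.
\[
  S(f_x, f_t)
  = \iint s_0(x - v t)\, e^{-j2\pi (f_x x + f_t t)}\, dx\, dt
  = S_0(f_x)\,\delta(f_t + v f_x),
\]
% so all energy lies on f_t = -v f_x; under acceleration v becomes
% time-varying and the support of the short-time spectrum tilts over time,
% which is why the paper works with a time-varying filter along the
% accelerated trajectory.
```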

Journal ArticleDOI
TL;DR: An improved Hybrid DPCM image sequence coder is presented, which includes a block classification unit, which detects stationary textured background blocks and allows copying such blocks instead of replenishing them, thus reducing the bit-rate of the coder.
Abstract: In Hybrid DPCM image sequence coders, every block with high variance in the corresponding difference block is replenished even if it belongs to a stationary background region, as can be the case for coarse textured blocks. We present an improved Hybrid DPCM image sequence coder, which includes a block classification unit. This unit detects stationary textured background blocks and allows copying such blocks instead of replenishing them, thus reducing the bit-rate of the coder. The unit consists of three parts. The first part is a statistics-based change detector. Segmentation of each image into textured and smooth regions, a fast converging deterministic relaxation procedure and a multi-resolution approach are used to obtain the final moving/stationary segmentation. In the second part, a Gaussian AR model-based texture matching test is proposed. The third part detects edges in stationary blocks. To avoid edge related artifacts, blocks that contain an edge are unconditionally coded. The coder which incorporates the proposed change detector was found, in simulations, to provide a substantial reduction in bit-rate, while maintaining the quality of the reconstructed sequences, in coding image sequences which contain large areas of coarse textured background.
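A toy version of the first stage only, a variance-based change detector on blocks (the relaxation, AR texture test and edge stages are omitted; block size, noise variance and threshold factor are assumptions):

```python
# Block-wise change detection sketch: a block is declared moving when the
# variance of its frame difference exceeds a noise-calibrated threshold.
import numpy as np

def classify_blocks(cur, prev, n=16, noise_var=4.0, factor=3.0):
    diff = cur.astype(float) - prev.astype(float)
    H, W = diff.shape
    labels = np.zeros((H // n, W // n), dtype=bool)   # True = moving
    for i in range(H // n):
        for j in range(W // n):
            blk = diff[i*n:(i+1)*n, j*n:(j+1)*n]
            labels[i, j] = blk.var() > factor * noise_var
    return labels
```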

Journal ArticleDOI
Pasi Fränti
TL;DR: A composite modelling method is presented that reduces the amount of data to be coded by arithmetic coding: uniform areas are coded with little computation and arithmetic coding is applied only to the areas with more variation.
Abstract: The use of arithmetic coding for binary image compression achieves a high compression ratio but suffers from a rather long running time. The composite modelling method presented in this paper reduces the amount of data to be coded by arithmetic coding. The method codes the uniform areas with less computation and applies arithmetic coding only to the areas with more variation.
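The modelling split can be sketched as follows (block size and the two-symbol uniform codes are assumptions; the arithmetic coder itself is omitted, only the routing decision is shown):

```python
# Composite-modelling sketch: uniform blocks are signalled cheaply, only
# mixed blocks would be handed to the slower arithmetic coder.
import numpy as np

def split_blocks(binary_img, n=4):
    """Yield ('white' | 'black' | 'mixed', block) for each n x n block."""
    H, W = binary_img.shape
    for y in range(0, H - H % n, n):
        for x in range(0, W - W % n, n):
            blk = binary_img[y:y+n, x:x+n]
            s = blk.sum()
            if s == 0:
                yield 'white', blk
            elif s == blk.size:
                yield 'black', blk
            else:
                yield 'mixed', blk   # only these reach the arithmetic coder

img = np.random.rand(64, 64) < 0.05    # sparse synthetic binary image
mixed = sum(1 for kind, _ in split_blocks(img) if kind == 'mixed')
print(f"{mixed} of {64 * 64 // 16} blocks need arithmetic coding")
```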


Journal ArticleDOI
TL;DR: The principal results show that lip information alone is not sufficient for speech segmentation; however, lip information may assist an audio speech segmentation system if the speech signals are corrupted by noise.
Abstract: This paper describes the application of image processing techniques in extracting the lip kinematics parameters (velocity and displacement) from image sequences. The centres of the lips are located by morphological image processing and cluster analysis. The motion of the lips is determined by a block matching algorithm. The paper presents a modified block matching algorithm which solves the problems caused by uniform shading and texture. The paper also describes a method which transforms the motion vectors into lip velocities and displacements. Moreover, the correlation between the lip information and the speech signals is demonstrated. Finally, the paper explains how the lip-tracking system can be applied to speech segmentation. The principal results show that lip information alone is not sufficient for speech segmentation. However, lip information may assist an audio speech segmentation system if the speech signals are corrupted by noise.

Journal ArticleDOI
TL;DR: The optimum algorithm is adaptive and combines concepts originating from adaptive filtering theory and operations research and is based on three functions: a non-stationary source predictor to estimate a coming horizon of future bit-rates, a cost function to be minimized and an algorithmic search of the optimum policy aiming at minimizing the previous cost function.
Abstract: This paper presents an optimum control algorithm for digital television and high-definition television codecs. Transmission at either constant or variable bit-rates is taken into consideration, with the purpose of transmitting on ATM networks. Two main goals are expected to be achieved in a codec which controls its output bit-rate. The first is to realize graceful control and degradation of image quality, and the second is to achieve an optimum use of the buffer in order to maximize image quality and avoid any buffer overflow. Both the action on buffer content and that on image quality turn out to be tightly related in the optimum algorithm. The image quality control is mainly applied during the non-stationary periods of the incoming information source, i.e. especially during scene changes, to produce smooth variations of image quality. The control of buffer level is performed during the stationary or predictable periods of the source, i.e. within the scenes. The optimum algorithm is adaptive, combines concepts originating from adaptive filtering theory and operations research, and is based on three functions: a non-stationary source predictor to estimate a coming horizon of future bit-rates, a cost function to be minimized, and an algorithmic search of the optimum policy aiming at minimizing that cost function.
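As a much simplified point of comparison (this is a plain proportional scheme, not the paper's predictor-plus-cost-function optimal control), buffer-based regulation of the quantizer step can look like this:

```python
# Toy buffer-based rate control: coarsen the quantizer when the projected
# buffer fullness rises above its target, refine it when below.
def next_quant_step(step, buffer_bits, buffer_size, predicted_bits,
                    channel_bits_per_pic, gain=0.5):
    target = 0.5 * buffer_size
    projected = buffer_bits + predicted_bits - channel_bits_per_pic
    error = (projected - target) / buffer_size   # fullness error in [-1, 1]
    return max(1.0, step * (1.0 + gain * error))
```

The paper replaces the one-step prediction and proportional update with a horizon of predicted bit-rates and an explicit search for the cost-minimizing control policy.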

Journal ArticleDOI
TL;DR: A new fully adaptive discrete cosine transform (DCT) based color image sequence coding system is presented, in which an adaptive Laplacian quantizer and a two-sided Huffman entropy coder are used to obtain the optimal coding result.
Abstract: A new fully adaptive discrete cosine transform (DCT) based color image sequence coding system is presented. A variable size block matching motion-compensation scheme is first designed. It differs from conventional block matching motion compensation by varying the block size to make a better trade-off between the required bit-rate and picture quality. Connected to the compensation scheme is the DCT, whose block size is adapted to the working block size derived from the motion compensator. Different gating functions are defined on the DCT to select the most efficient DCT coefficients in order to reach the best performance. An adaptive Laplacian quantizer and a two-sided Huffman entropy coder are finally utilized to obtain the optimal coding result.

Journal ArticleDOI
TL;DR: A new equi-spaced 3-level algorithm which is fast and has nearly optimum mean square error is described.
Abstract: For high-quality real-time image compression at moderate bit-rates the equi-spaced 3-level block truncation coding algorithm is an attractive coding method. Unfortunately, the present-day equi-spaced 3-level algorithms are not optimum (in the mean square error sense). In this paper we describe a new equi-spaced 3-level algorithm which is fast and has nearly optimum mean square error.
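Analogously to the two-level BTC sketch shown earlier, the equi-spaced 3-level family quantizes each block to levels m-d, m, m+d with thresholds halfway between; the sketch below uses an assumed spacing heuristic, whereas the paper's contribution is precisely a fast, near-MSE-optimal choice of these parameters.

```python
# Equi-spaced 3-level block quantization sketch (heuristic parameters).
import numpy as np

def btc3_encode(block, m=None, d=None):
    m = block.mean() if m is None else m
    d = max(block.std() * 1.2, 1e-6) if d is None else d  # assumed spacing
    idx = np.digitize(block, [m - d / 2, m + d / 2])      # 0, 1 or 2
    return m, d, idx

def btc3_decode(m, d, idx):
    return m + (idx - 1) * d                              # levels m-d, m, m+d
```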

Journal ArticleDOI
TL;DR: The proposed dynamic finite state vector quantization technique can dynamically adapt to the statistics of the input image based on the previous encoding experience and can achieve higher coding efficiency.
Abstract: A dynamic finite state vector quantization technique for colour image compression is proposed. The method efficiently exploits both the statistical redundancy between the colour components of a pixel and the high correlation between adjacent pixels. The image is compressed losslessly with respect to the colour-quantized index image. Experimental results show that the method can significantly reduce the storage requirement while maintaining excellent image quality. An improved technique which incorporates learning automata is also devised; it can dynamically adapt to the statistics of the input image based on the previous encoding experience and achieves higher coding efficiency. It is also compared with different adaptive lossless compression methods and the results are encouraging.

Journal ArticleDOI
TL;DR: The software-based moving picture coding system can compress moving pictures (video) under the real-time constraints of video applications, such as videophone/videoconference, without using any expensive compression chips.
Abstract: A novel moving picture coding system, called the software-based moving picture coding system, is presented in this paper. In this coding system, two new techniques, modified block truncation codes and multiresolution-in-time sampling, are used for real-time encoding and decoding of moving pictures. The modified block truncation codes can process image data three times faster, and with a two times higher compression ratio, than the traditional ones. The multiresolution-in-time sampler samples the image blocks at varying sampling rates, which depend on the activities of the image blocks. This technique makes the coding process faster and the compression ratio higher. By using these techniques, the software-based moving picture coding system can compress moving pictures (video) under the real-time constraints of video applications, such as videophone/videoconference, without using any expensive compression chips.

Journal ArticleDOI
TL;DR: As the lower frequency limit of temporal sampling is evaluated, two principal methods of upward conversion, based on kinematic information, are compared with respect to the motion-information precision which is needed, from the subjective point of view, for maintaining high quality in image portrayal.
Abstract: Based on the results of various studies of the perception of motion by human vision, this report introduces a model whose essence is the visually adapted subsampling and presentation of pictures in motion. Besides the normal data on the picture content, the new motion portrayal includes additional information with which the kinematically correct generation of sharp intermediate images is accomplished instantaneously. As a consequence, arbitrary standards of temporal conversion become feasible. Furthermore, within the scope of subjects related to temporal sampling of moving images, this study focuses on an important question: what is the lowest rate at which the additional information has to be provided so that the picture interpolation is subjectively judged as being still of the highest quality? As the lower frequency limit of temporal sampling is evaluated, two principal methods of upward conversion, based on kinematic information, are compared with respect to the motion-information precision which is needed, from the subjective point of view, for maintaining high quality in image portrayal.

Journal ArticleDOI
TL;DR: A low complexity, very good visual quality video subband coder fully compatible with ATM networks that exploits the characteristics of subband decomposed image through a ‘multiresolution’ quantization methodology that enables the coder to be layered and variable bit-rate.
Abstract: This paper discusses a low-complexity, very good visual quality video subband coder fully compatible with ATM networks. The coder is designed to compress an incoming video signal bit-rate by a factor of 9–13 and it is applicable to ‘Head & Shoulders’ as well as ‘Full Motion’ sequences. Since these results are obtainable within a 17 MIPS computational complexity budget, the coder can be implemented on a conventional digital signal processor. The system exploits the characteristics of the subband-decomposed image through a ‘multiresolution’ quantization methodology. This new resource allocation technique enables the coder to be layered and variable bit-rate. These two properties are essential for optimal use of the coder on a fast packet switching network.

Journal ArticleDOI
Tero Koivunen, Jouni Salonen
TL;DR: A new motion estimation algorithm that can be used in motion compensated standards conversions and a scanning rate conversion method for motion compensated field rate upconversion from an interlaced 50 Hz to progressive 100 Hz format are proposed.
Abstract: This paper describes a new motion estimation algorithm that can be used in motion compensated standards conversions. As an example, we propose a scanning rate conversion method for motion compensated field rate upconversion (FRU) from an interlaced 50 Hz to progressive 100 Hz format. Motion estimation is done on the block matching basis using a bit level classification approach (BLC). Motion vector selection is controlled by both the quantized rough shape and the edge information of the image. Non-linear vector postprocessing is used to obtain a consistent and uniform vector field. Block segmentation is used to improve the accuracy of the vector field.
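A sketch in the spirit of bit-level matching (the paper's BLC classification is more elaborate than this): frames are reduced to one bit per pixel by comparison against a local mean, and blocks are matched by counting XOR mismatches, which is far cheaper than full-precision SAD.

```python
# One-bit block matching sketch: binarize against a local mean, then
# match by XOR popcount instead of full-precision differences.
import numpy as np
from scipy.ndimage import uniform_filter

def one_bit(frame, k=9):
    """One bit per pixel: above or below the k x k local mean."""
    return frame >= uniform_filter(frame.astype(float), size=k)

def bit_match(cur_bits, ref_bits, y, x, size=16, radius=4):
    block = cur_bits[y:y + size, x:x + size]
    best, best_cost = (0, 0), size * size + 1
    for vy in range(-radius, radius + 1):
        for vx in range(-radius, radius + 1):
            ry, rx = y + vy, x + vx
            if (0 <= ry <= ref_bits.shape[0] - size
                    and 0 <= rx <= ref_bits.shape[1] - size):
                cost = np.count_nonzero(
                    block ^ ref_bits[ry:ry + size, rx:rx + size])
                if cost < best_cost:
                    best, best_cost = (vy, vx), cost
    return best
```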

Journal ArticleDOI
TL;DR: It is proved analytically that there always exist nonnegative source splitting gains, which are closely related to the reduction of the average codelength of the variable length code; this is also confirmed by experiments with (r, l) symbols.
Abstract: This paper presents a new approach to efficient variable length coding which employs source splitting and component encoding. In order to improve the performance of the Huffman encoding of (r, l) symbols, which are obtained from the zigzag scanning of the quantized DCT coefficients, a simple method is introduced for splitting a source into two sources of smaller entropies, with no overhead information to be transmitted for reconstructing the outcomes of the original source. It is proved analytically that there always exist nonnegative source splitting gains, which are closely related to the reduction of the average codelength of the variable length code; this is also confirmed by experiments with (r, l) symbols.

Journal ArticleDOI
TL;DR: A method for complete color correction in electronic slide scanning is presented, based on the color signals delivered by a conventional RGB slide scanner, which are used to determine the spectral color stimulus via the color pigment concentrations in every pixel.
Abstract: In spite of the remarkable progress in electronic imaging, conventional color film is still one of the best media for the primary shooting of images at high spatial resolution. Therefore, it is an important source of high-grade images for electronic image processing. This paper presents a method for complete color correction in electronic slide scanning. The method is based on the color signals delivered by a conventional RGB slide scanner. These signals, however, do not represent exact color values, as the scanner is not capable of perceiving color as the human eye does. Contrary to other methods, in the following approach the primary signals are used to determine the spectral color stimulus via the color pigment concentrations in every pixel. Knowing the color stimulus then allows the calculation of correct RGB color values. A sufficiently accurate numerical model of the scanning process is essential for this method. As a considerably long calculation time is needed if the algorithm is implemented on presently available workstations, multidimensional tables used in combination with linear interpolation permit the correction time to be reduced significantly.
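The table-plus-interpolation speedup mentioned at the end is a standard pattern and can be sketched generically (the correction function below is an arbitrary placeholder, not the paper's pigment-concentration model; grid size is an assumption):

```python
# 3-D lookup table with trilinear interpolation: precompute an expensive
# per-pixel correction f(r, g, b) on a coarse grid, apply it fast.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def build_lut(correct, n=17):
    """Tabulate a correction function on an n^3 grid over [0, 1]^3."""
    axis = np.linspace(0.0, 1.0, n)
    r, g, b = np.meshgrid(axis, axis, axis, indexing='ij')
    table = correct(np.stack([r, g, b], axis=-1))   # shape (n, n, n, 3)
    return RegularGridInterpolator((axis, axis, axis), table)

def placeholder(rgb):            # stand-in for the real correction model
    return rgb ** 0.9

lut = build_lut(placeholder)
pixels = np.random.rand(1000, 3)                    # RGB values in [0, 1]
corrected = lut(pixels)                             # fast approximate f
```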

Journal ArticleDOI
Joo-Hee Moon, Jae-Kyoon Kim
TL;DR: It is shown that, considering the estimation accuracy, the convergence and the complexity of the estimation procedure, a 2-D motion model with either six or four description parameters is a useful choice for describing the motion of planar objects that are nearly parallel to the image plane and undergo translational and rotational motion.
Abstract: Two-dimensional (2-D) motion models provide, on the image plane, approximate descriptions of the true motion generated by a 3-D motion model. This paper is concerned with several 2-D motion models and a 3-D motion model for a 3-D planar object. The objectives of this work are to measure the accuracy of minimum-MSE motion estimation based on 2-D motion models and to compare the convergence of the estimation process for the different motion models. The accuracy is measured by numerical example using the optimum motion description parameters for minimum MSE. The optimum description parameters are obtained by minimizing a mapping error function between the exact and approximate mappings. The convergence is examined for the gradient-based motion estimation algorithm on a test image sequence with and without random noise. It is shown that, considering both the estimation accuracy and convergence and the complexity of the estimation procedure, a 2-D motion model with either six or four description parameters is a useful choice for describing the motion of planar objects that are nearly parallel to the image plane and undergo translational and rotational motion. For objects which are not parallel to the image plane or which undergo linear deformation, a 2-D motion model with six parameters is required for an accurate description of the motion.
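For orientation, the commonly used 2-D parameterizations with these parameter counts are the affine and the translation-rotation-zoom (similarity) models; it is an assumption that these match the paper's models exactly, but the counts correspond:

```latex
% Displacement (u, v) at pixel (x, y) under the two standard 2-D models:
\begin{align*}
  \text{6 parameters (affine):} \quad
    & u = a_1 + a_2 x + a_3 y, & v &= a_4 + a_5 x + a_6 y,\\
  \text{4 parameters (similarity):} \quad
    & u = b_1 + b_3 x - b_4 y, & v &= b_2 + b_4 x + b_3 y.
\end{align*}
% The similarity model covers translation, rotation and zoom; the affine
% model additionally captures the shear and anisotropic scaling needed for
% planes that are tilted relative to the image plane.
```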

Journal ArticleDOI
TL;DR: The software developed, based on the Discrete Cosine Transform, was successfully implemented on an on-board computer, taking into account the several constraints of this space mission.
Abstract: This paper presents the study carried out for the international planetary exploration mission Phobos II (1988) concerning the on-board coding of Phobos images. The developed software, based on the Discrete Cosine Transform, was successfully implemented on an on-board computer, taking into account the several constraints of this space mission.