diff --git a/content/code/image_video_processing/basicvideotools/_index.md b/content/code/image_video_processing/basicvideotools/_index.md index 2d2c9082..1ad07541 100644 --- a/content/code/image_video_processing/basicvideotools/_index.md +++ b/content/code/image_video_processing/basicvideotools/_index.md @@ -2,10 +2,12 @@ title: "BasicVideoTools" img: "mov.webp" image_alt: "BasicVideoTools Image" -link: "./subpages/basic_video.html" +link: "./basicvideotools/content" description: | A Matlab Toolbox with convenient functions to handle video data. It includes routines to read VQEG and LIVE databases, generate synthetic sequences with controlled 2D and 3D speed, spatio-temporal Fourier transforms, perceptual sensors and filters (V1 and MT cells), and spatio-temporal CSFs. references: - "Importance of quantiser design compared to optimal multigrid motion estimation in video coding. Malo, J., Ferri, F.J., Gutierrez, J., and Epifanio, I. Electronics Letters, 36(9):807-809, 2000." - "Video quality measures based on the standard spatial observer. Watson, A.B., and Malo, J. ICIP, 2002." +type: "code" +layout: "single" --- \ No newline at end of file diff --git a/content/code/image_video_processing/basicvideotools/content.md b/content/code/image_video_processing/basicvideotools/content.md new file mode 100644 index 00000000..c652ac56 --- /dev/null +++ b/content/code/image_video_processing/basicvideotools/content.md @@ -0,0 +1,56 @@ +--- +title: "Basic Video Tools: A Matlab Toolbox for Video Data and Spatio-Temporal Vision Models (J. Malo, J. Gutiérrez and V. Laparra (c) Universitat de València 1996 - 2014)" +abstract: | + # What is in BasicVideoTools? + BasicVideoTools is a Matlab/Octave Toolbox intended to deal with video data and spatio-temporal vision models. In particular, it includes convenient *.m files to: + - Read standard (VQEG and LIVE) video data + - Rearrange video data (for instance, to perform statistical analysis) + - Generate controlled sequences (controlled contrast, texture, and 2D and 3D speed) + - Compute 3D Fourier transforms + - Play with motion perception models (spatial texture and motion-sensitive cells of LGN, V1 and MT, and spatio-temporal CSF) + - Visualize movies (achromatic only) + + # What is not in BasicVideoTools? + + BasicVideoTools does not include: + - Optical flow or motion estimation/compensation algorithms + - Video Coding algorithms + - Video Quality Measures + + If you are looking for the above, please consider downloading other Toolboxes: + + - Motion estimation: + - [Hierarchical Block Matching](http://www.scholarpedia.org/article/Optic_flow) + - [Video Coding (improved MPEG)](./../../videocodingtools/content) + - [Video Quality](./../../videoqualitytools/content) + + # Download BasicVideoTools! + + - [The code](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/vista_toolbox/BasicVideoTools_code.zip) (version 1.0; use this version only for compatibility with the code in the experiments of the motion-aftereffect paper).
+ + - [The code](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/basic_video/BasicVideoTools_v3.zip) (version 3.0: improved sampling functions, additional motion-sensitive cells, and more) + + - Optional data (not necessary to run the code): If you use these data, please cite the VQEG and LIVE databases (for video), and the CVC Barcelona Database (for images) + + - [Image data](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/basic_video/image_data.zip) (1.8 GB). Luminance images from the CVC Barcelona Calibrated Image Database. + + - [Video data](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/basic_video/video_data.rar) (2.6 GB): Raw videos from the VQEG and LIVE video databases. + + # Installation and Requirements + + - Download the BasicVideoTools file(s) + - Decompress them on your machine into a folder named BasicVideoTools (no location restrictions for this folder) + - Update the Matlab/Octave path to include all subfolders + - Tested on Matlab 2006b and later versions + + * Video and image data are only required if you want to gather statistics from natural videos or from natural images with controlled speed + + # How to get started? + For a general overview, take a look at the contents.m file, or (after adding the toolbox to the path) just ask for help by typing the name of the folder, for instance: help BasicVideoTools_v2. + + For additional details on how to use the functions in practice, see the demos: + + - **demo_motion_programs**, demo on how to use most functions (except random-dot and Newtonian sequences). + - **example_random_dots_sequence**, demo on random-dot sequences with controlled flow. + - **example_newtonian_sequence**, demo on physics-controlled sequences. +--- \ No newline at end of file diff --git a/content/code/image_video_processing/spatiospectraltools/_index.md b/content/code/image_video_processing/spatiospectraltools/_index.md index 0e36f3ee..c7e7b594 100644 --- a/content/code/image_video_processing/spatiospectraltools/_index.md +++ b/content/code/image_video_processing/spatiospectraltools/_index.md @@ -2,7 +2,7 @@ title: "SpatioSpectralTools" img: "constancy.webp" image_alt: "SpatioSpectralTools Image" -link: "./subpages/spatiospectral.html" +link: "./spatiospectraltools/content" description: | SpatioSpectralTools is a Matlab Toolbox for reflectance and illuminant estimation that uses spatial information to simplify the (otherwise ill-conditioned) inverse problem. The proposed analysis is useful to derive the spatio-spectral resolution required to solve a retrieval problem. references: diff --git a/content/code/image_video_processing/spatiospectraltools/content.md b/content/code/image_video_processing/spatiospectraltools/content.md new file mode 100644 index 00000000..ecff5c75 --- /dev/null +++ b/content/code/image_video_processing/spatiospectraltools/content.md @@ -0,0 +1,68 @@ +--- +title: "The role of spatial information in disentangling the irradiance-reflectance-transmittance ambiguity" +abstract: | + In satellite hyperspectral measurements, the contributions of light, surface, and atmosphere are mixed. Applications need separate access to the sources. Conventional inversion techniques usually take a pixel-wise, spectral-only approach. However, recent improvements in retrieving surface and atmosphere characteristics use heuristic spatial smoothness constraints.
+ + In this paper we theoretically justify such heuristics by analyzing the impact of spatial information on the uncertainty of the solution. The proposed analysis makes it possible to assess in advance the uniqueness (or robustness) of the solution depending on the curvature of a likelihood surface. In situations where pixel-based approaches become unreliable, the consideration of spatial information always makes the problem better conditioned. With the proposed analysis this is easily understood since the curvature is consistent with the complexity of the sources measured in terms of the number of significant eigenvalues (or free parameters in the problem). In agreement with recent results in hyperspectral image coding, spatial correlations in the sources imply that the intrinsic complexity of the spatio-spectral representation of the signal is always lower than that of its spectral-only counterpart. According to this, the number of free parameters in the spatio-spectral inverse problem is smaller, so spatio-spectral approaches are always better than spectral-only approaches. + + Experiments using ensembles of actual reflectance values and realistic MODTRAN irradiance and atmosphere radiance and transmittance values show that the proposed analysis successfully predicts the practical difficulty of the problem and the improved quality of spatio-spectral retrieval. + + ## Supplementary Material + 1. Extends the results in the manuscript to different spatial structures. + 2. Extends the results in the manuscript to different wavelength ranges and spatio-spectral resolutions. + 3. Statistically justifies the initialization scheme of the sources. + 4. Provides sample data and code. + + The generality of the conclusion is not surprising since the imaging equation and the PCA decompositions do not depend on the specific spatio-spectral resolution or wavelength range. The joint spatio-spectral approach will simplify the problem whenever there are relations between the signal at different spatial positions, which is true in a wide range of situations given the spatial continuity of the physical sources (the reflecting objects and the atmospheric phenomena). + + ## 1. Effect of the Spatial Structure + Original and estimated reflectance images for sites of different spatial complexity (urban, forest, and open fields) using spectral-only and spatio-spectral retrieval. In these cases, the spatial resolution and wavelength range were the same as in the manuscript. No additional training was necessary, only the application of the previous analysis to new test locations. + + ## 2. Effect of Wavelength Range and Spatio-Spectral Resolution + In this experiment, we used substantially different wavelength ranges and spatio-spectral resolutions from those in the manuscript. + + ## 3. Initialization of the Sources (Surface Reflectance, Atmosphere Radiance, and Transmittance) + The retrieval procedure used to check the accuracy of the theoretical predictions involves a series of search loops that require an initialization of the variables S, A, and T. Since we searched in the decorrelated PCA domains, each coefficient of these sources was independently initialized using a random value drawn from the empirical marginal PDFs (histograms) learned at the training stage. Below we show examples of the marginal PDFs for some AC coefficients of these sources. The strong peak at zero makes zero initialization reasonable as well.
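A minimal Matlab sketch of the sampling step described above (inverse-CDF sampling from an empirical histogram). The variable names here are hypothetical placeholders, not the functions shipped in the toolbox:

```matlab
% Sketch: draw initial values for one PCA coefficient of S, A or T from the
% empirical marginal PDF (histogram) learned at the training stage.
[counts, edges] = histcounts(train_coeff, 50);    % empirical marginal histogram
cdf = [0, cumsum(counts)] / sum(counts);          % empirical CDF at the bin edges
[cdf_u, iu] = unique(cdf);                        % strictly increasing samples for interp1
x0 = interp1(cdf_u, edges(iu), rand(n_init, 1));  % inverse-CDF sampling of n_init values
```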
+ + +referencias: + - nombre: "The role of spatial information in disentangling the irradiance-reflectance-transmittance ambiguity" + autores: "Sandra Jiménez and Jesús Malo" + publicacion: "Accepted to IEEE Trans. Geosci. Rem. Sens." + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/spatiospectral/manuscr_TGRS_2012_00431.pdf" + +enlaces: + - nombre: "Download Data and Code" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/spatiospectral/code_retrieval_and_data.zip" + descripcion: "(*.zip file with data and Matlab toolbox. For running the examples, read and execute the demo_retrieval.m file)." + +imagenes: + - ruta: "org_contex.webp" + titulo: "[Original and Estimated Reflectance - Urban Context](#1-effect-of-the-spatial-structure)" + descripcion: "Reflectance images for urban, forest, and open field sites using spectral-only and spatio-spectral retrieval approaches." + + - ruta: "reco_contex.webp" + titulo: "[Reconstructed Reflectance for Different Contexts](#1-effect-of-the-spatial-structure)" + descripcion: "Reconstructed reflectance images from different contexts, highlighting the difference between spectral-only and spatio-spectral retrieval." + + - ruta: "errores.webp" + titulo: "[Error Maps for Reflectance Estimation](#2-effect-of-wavelength-range-and-spatio-spectral-resolution)" + descripcion: "Error maps showing the differences in reflectance estimation using spectral-only and spatio-spectral methods." + + - ruta: "IR_example.webp" + titulo: "[Infrared Example - Different Wavelengths and Resolutions](#2-effect-of-wavelength-range-and-spatio-spectral-resolution)" + descripcion: "Example using different wavelength ranges and spatio-spectral resolutions in reflectance retrieval." + + - ruta: "marginales_S.webp" + titulo: "[Marginal Distributions for Surface Reflectance](#3-initialization-of-the-sources-surface-reflectance-atmosphere-radiance-and-transmittance)" + descripcion: "Marginal PDFs for surface reflectance coefficients in the PCA domain." + + - ruta: "marginales_A.webp" + titulo: "[Marginal Distributions for Atmosphere Radiance](#3-initialization-of-the-sources-surface-reflectance-atmosphere-radiance-and-transmittance)" + descripcion: "Marginal PDFs for atmospheric radiance coefficients in the PCA domain." + + - ruta: "marginales_T.webp" + titulo: "[Marginal Distributions for Atmosphere Transmittance](#3-initialization-of-the-sources-surface-reflectance-atmosphere-radiance-and-transmittance)" + descripcion: "Marginal PDFs for atmospheric transmittance coefficients in the PCA domain." +--- + diff --git a/content/code/image_video_processing/videocodingtools/_index.md b/content/code/image_video_processing/videocodingtools/_index.md index f1a4916f..f025a458 100644 --- a/content/code/image_video_processing/videocodingtools/_index.md +++ b/content/code/image_video_processing/videocodingtools/_index.md @@ -2,7 +2,7 @@ title: "VideoCodingTools" img: "cubo.webp" image_alt: "VideoCodingTools Image" -link: "../soft_visioncolor/subpages/video_coding.html" +link: "./videocodingtools/content" description: | VideoCodingTools is a Matlab Toolbox for motion estimation/compensation and video compression. Optical flow computation is done with perceptually meaningful hierarchical block matching, and residual quantization is done according to non-linear Human Visual System models.
references: diff --git a/content/code/image_video_processing/videocodingtools/content.md b/content/code/image_video_processing/videocodingtools/content.md new file mode 100644 index 00000000..5be6748f --- /dev/null +++ b/content/code/image_video_processing/videocodingtools/content.md @@ -0,0 +1,52 @@ +--- +title: "Motion Estimation and Video Coding Toolbox" +abstract: | + # Motion Estimation + Our approach to motion estimation in video sequences was motivated by the general scheme of the current video coders with motion compensation (such as MPEG-X or H.26X [Musmann85, LeGall91, Tekalp95]). + + In motion compensation video coders the input sequence, **A(t)**, is analyzed by a motion estimation system, **M**, that computes some description of the motion in the scene: typically the optical flow, **DVF(t)**. In the motion compensation module, **P**, this motion information can be used to predict the current frame, **A(t)**, from previous frames, **A(t-1)**. As the prediction, **Â(t)**, is not perfect, additional information is needed to reconstruct the sequence: the prediction error **DFD(t)**. This scheme is useful for video compression because the entropy of these two sources (motion, **DVF**, and errors, **DFD**) is significantly smaller than the entropy of the original sequence **A(t)**. + + The coding gain can be even larger if the error sequence is analyzed, and quantized, in an appropriate transform domain, as done in image compression procedures, using the transform **T** and the quantizer **Q**. + + Conventional optical flow techniques (based on local maximization of the correlation by block matching) provide a motion description that may be redundant for a human viewer. Computational effort may be wasted describing 'perceptually irrelevant motions'. This inefficient behavior may also give rise to false alarms and noisy flows. To solve this problem, hierarchical optical flow techniques have been proposed (as for instance in MPEG-4 and in H.263). They start from a low-resolution motion estimate and new motion information is locally added only in certain regions. However, new motion information should be added only if it is 'perceptually relevant'. Our contribution in motion estimation is a definition of 'perceptually relevant motion information' [Malo98, Malo01a, Malo01b]. This definition is based on the entropy of the image representation in the human cortex (Watson JOSA 87, Daugman IEEE T.Biom.Eng. 89): an increment in motion information is perceptually relevant if it contributes to decreasing the entropy of the cortical representation of the prediction error. Numerical experiments (optical flow computation and flow-based segmentation) show that, by applying this definition to a particular hierarchical motion estimation algorithm, more robust and meaningful flows are obtained [Malo00b, Malo01a, Malo01b]. + + # Video Coding + As stated in the scheme above, the basic ingredients of motion compensation video coders are the motion estimation module, **M**, and the transform and quantization module, **T+Q**. Given our work in motion estimation and in image representation for efficient quantization, the improvement of the current video coding standards is straightforward. See [Malo01b] for a comprehensive review, and [Malo97b, Malo00a] for the original formulation and specific analysis of the relative relevance of **M** and **T+Q** in the video coding process.
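To make the scheme concrete, here is a minimal sketch of a single motion-compensation step: exhaustive block matching plays the role of **M**, and DCT quantization of the residual plays the role of **T+Q**. This is plain Matlab with hypothetical variable names (A_t, A_prev), not the toolbox's hierarchical, perceptually weighted routines; dct2/idct2 require the Image Processing Toolbox:

```matlab
% Sketch of one motion-compensated coding step, A(t-1) -> A(t).
B = 16; S = 8;                                   % block size and search range (pixels)
[H, W] = size(A_t);                              % A_t, A_prev: grayscale frames (double)
pred = zeros(H, W);                              % motion-compensated prediction of A(t)
for i = 1:B:H-B+1
  for j = 1:B:W-B+1
    blk  = A_t(i:i+B-1, j:j+B-1);
    best = inf;
    for di = -S:S                                % exhaustive block matching (module M)
      for dj = -S:S
        ii = i+di; jj = j+dj;
        if ii < 1 || jj < 1 || ii+B-1 > H || jj+B-1 > W, continue; end
        cand = A_prev(ii:ii+B-1, jj:jj+B-1);
        err  = sum((blk(:) - cand(:)).^2);       % matching criterion
        if err < best, best = err; pred(i:i+B-1, j:j+B-1) = cand; end
      end
    end
  end
end
DFD   = A_t - pred;                              % prediction error (displaced frame difference)
step  = 8;                                       % uniform quantizer step (perceptual Q in the papers)
DFD_q = step * round(dct2(DFD) / step);          % transform (T) and quantize (Q) the residual
A_rec = pred + idct2(DFD_q);                     % reconstruction available at the decoder
```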
+ + Here is an example [Malo00a, Malo01b] of the relative gain in the reconstructed sequence (at 0.27 bits/pix) obtained from isolated improvements in motion estimation (**M**) and/or image representation and quantization (**T+Q**). + + In the above distortion-per-frame plot, thick lines correspond to algorithms with poor (linear) quantization schemes and thin lines correspond to improved (non-linear) quantization schemes. Dashed lines correspond to algorithms with improved motion estimation schemes. The conclusion is that, at the current bit rates, an appropriate image representation and quantization is considerably more important than improvements in motion estimation. + +referencias: + - nombre: "Perceptually weighted optical flow for motion-based segmentation in MPEG-4 paradigm" + autores: "J. Malo, J. Gutierrez, I. Epifanio, F. Ferri" + publicacion: "Electronics Letters, Vol. 36, 20, pp. 1693-1694 (2000)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/seg_ade2.ps" + + - nombre: "Perceptual feed-back in multigrid motion estimation using an improved DCT quantization" + autores: "J. Malo, J. Gutierrez, I. Epifanio, F. Ferri, J.M. Artigas" + publicacion: "IEEE Transactions on Image Processing, Vol. 10, 10, pp. 1411-1427 (2001)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/ieeeoct01.pdf" + + - nombre: "Importance of quantizer design compared to optimal multigrid motion estimation in video coding" + autores: "J. Malo, F. Ferri, J. Gutierrez, I. Epifanio" + publicacion: "Electronics Letters, Vol. 36, 9, pp. 807-809 (2000)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/elect00.ps" + +enlaces: + - nombre: "Motion_estimation_and_Video coding_code.zip" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Motion_estimation_and_Video%20coding_code.zip" + +imagenes: + - ruta: "coderde.webp" + titulo: "[Motion Estimation Scheme](#motion-estimation)" + descripcion: "Illustration of the video coder scheme with motion estimation and prediction error quantization." + + - ruta: "coding.webp" + titulo: "[Video Coding Example](#video-coding)" + descripcion: "Comparison of video coding schemes with different levels of motion estimation and quantization efficiency." + + - ruta: "distort.webp" + titulo: "[Distortion per Frame](#video-coding)" + descripcion: "Distortion per frame plot comparing algorithms with improved motion estimation and non-linear quantization schemes." +--- diff --git a/content/code/image_video_processing/videoqualitytools/_index.md b/content/code/image_video_processing/videoqualitytools/_index.md index 758eef18..3f3c952c 100644 --- a/content/code/image_video_processing/videoqualitytools/_index.md +++ b/content/code/image_video_processing/videoqualitytools/_index.md @@ -2,10 +2,12 @@ title: "VideoQualityTools" img: "FeatureImage_vreveal1.webp" image_alt: "VideoQualityTools Image" -link: "../soft_visioncolor/subpages/video_quality.html" +link: "./videoqualitytools/content" description: | VideoQualityTools is a Matlab Toolbox for perceptual video quality assessment based on the Standard Spatial Observer model augmented with Divisive Normalization. It performed second-best in VQEG Phase-I using no ad-hoc hand-crafted features. references: - "Importance of quantiser design compared to optimal multigrid motion estimation in video coding. Malo, J., Ferri, F.J., Gutierrez, J., and Epifanio, I.
Electronics Letters, 36(9):807-809, 2000." - "Video quality measures based on the standard spatial observer. Watson, A.B., and Malo, J. ICIP, 2002." +type: "code" +layout: "single" --- \ No newline at end of file diff --git a/content/code/image_video_processing/videoqualitytools/content.md b/content/code/image_video_processing/videoqualitytools/content.md new file mode 100644 index 00000000..6485d1e9 --- /dev/null +++ b/content/code/image_video_processing/videoqualitytools/content.md @@ -0,0 +1,22 @@ +--- +title: "Video Quality Measures based on the Standard Spatial Observer (A. B. Watson and J. Malo)" +abstract: | + Video quality metrics are intended to replace human evaluation with evaluation by machine. To accurately simulate human judgement, they must include some aspects of the human visual system. + + In this paper we present a class of low-complexity video quality metrics based on the Standard Spatial Observer (SSO). In these metrics, the basic SSO model is improved with several additional features from current human vision models. + + To evaluate the metrics, we make use of the data set recently produced by the Video Quality Experts Group (VQEG), which consists of subjective ratings of 160 samples of digital video covering a wide range of quality. For each metric we examine the correlation between its predictions and the subjective ratings. + + The results show that SSO-based models with local masking obtain the same degree of accuracy as the best metric considered by VQEG (P5), and significantly better correlations than the other VQEG models. The results suggest that local masking is a key feature to improve the correlation of the basic SSO model. + +referencias: + - nombre: "Video Quality Measures based on the Standard Spatial Observer" + autores: "A. B. Watson and J. Malo" + publicacion: "Proc. IEEE Intl. Conf. Im. Proc., Vol. 3, pp. 41-44 (2002)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/icip02.pdf" + +enlaces: + - nombre: "video_metric_sso.zip" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/video_metric_sso.zip" +--- + diff --git a/content/code/image_video_processing/vistacore/_index.md b/content/code/image_video_processing/vistacore/_index.md index 450bb2b7..b909e4a8 100644 --- a/content/code/image_video_processing/vistacore/_index.md +++ b/content/code/image_video_processing/vistacore/_index.md @@ -2,7 +2,7 @@ title: "ViStaCoRe: Visual Statistics Coding and Restoration Toolbox" img: "barbara_jpeg.webp" image_alt: "ViStaCoRe Image" -link: "./subpages/kecode.html" +link: "./vistacore/content" description: | The ViStaCoRe Coding Package is a Matlab Toolbox for achromatic and color image compression that includes a set of transform coding algorithms based on (1) Human Vision Models of different accuracy, and (2) coefficient selection through Sparse Regression in local frequency domains (in particular SVR). The ViStaCoRe Restoration Package is a Matlab Toolbox for image restoration that includes (1) classical regularization techniques, (2) classical wavelet thresholding techniques, (3) regularization functionals based on non-linear Human Vision models, and (4) denoising techniques based on Kernel regression in wavelet domains. references: - "On the suitable domain for SVM training in image coding. Camps-Valls, G., Gutiérrez, J., Gómez-Pérez, G., and Malo, J. Journal of Machine Learning Research, 9:49-66, 2008."
- "Regularization operators for natural images based on nonlinear perception models. Gutiérrez, J., Ferri, F.J., and Malo, J. IEEE Transactions on Image Processing, 15(1):189-200, 2006." - "Nonlinear image representation for efficient perceptual coding. Malo, J., Epifanio, I., Navarro, R., and Simoncelli, E.P. IEEE Transactions on Image Processing, 15(1):68-80, 2006." + +type: "code" +layout: "single" --- \ No newline at end of file diff --git a/content/code/image_video_processing/vistacore/content.md b/content/code/image_video_processing/vistacore/content.md new file mode 100644 index 00000000..35c710f1 --- /dev/null +++ b/content/code/image_video_processing/vistacore/content.md @@ -0,0 +1,151 @@ +--- +title: "ViStaCoRe: Visual Statistics Coding and Restoration" +abstract: | + **Authors:** V. Laparra, J. Gutirrez, I. Epianio, G. Gmez, J. Muoz, G. Camps-Valls, and J. Malo + + Efficient coding of visual information and efficient inference of missing information in images depend on two factors: (1) the statistical structure of photographic images, and (2) the nature of the observer that will analyze the result. Interestingly, these two factors (image regularities and human vision) are deeply related since the evolution of biological sensors seems to be guided by statistical learning. However, the simultaneous consideration of these two factors is unusual in the image processing community, particularly beyond Gaussian image models and linear models of the observer. + + In contrast, this MATLAB toolbox for image coding and restoration is simultaneously based on the well established non-Gaussian nature of visual scenes and the well-known nonlinear behavior of visual cortex. This example of combined approach is sensible since these are two sides of the same issue in vision. Specifically, the core algorithms are (1) Divisive Normalization, a canonical computation in sensory neurons with interesting statistical effects, and (2) Sparse regression (in particular Support Vector Regression) that takes into account the statistical relations between image coefficients after linear transforms. In this report we illustrate the relations between the statistical features and the perception models that justify the qualitative equivalence of these techniques. The presented toolbox wraps these related statistical and perceptual factors and includes previous methods for comparison purposes. + + This unified toolbox allows, for the first time, a fair comparison between the different factors in previous literature. As a consequence, the previous results can be seen from a new perspective: while the benefits of SVMs in local-frequency domains are confirmed in restoration, their relevance is scarce in coding once the perceptual normalization has been applied. + + # Coding Results + + See images. + + ## Image Coding schemes included in KeCode + + - JPEG-like coding: linear CSF + uniform quantizer. + - Non-uniform adaptive quantizer based on simplified masking models [Malo95, Malo99, Malo00]. + - Non-uniform adaptive quantizer based on general masking models [Malo06]. + - SVM DCT coefficient selection using simplified CSF [Robinson03]. + - SVM DCT coefficient selection using accurate CSF [Gmez05]. + - SVM coefficient selection in divisive normalized domain [Camps08]. + - SVM coefficient selection in divisive normalized domain with accurate color contrast definition [Gutirrez12]. + + # Restoration Results + + See images. 
+ + ## Image Restoration schemes included in KeCode + + - Wavelet and Kernel based denoising methods + - SVM regression with Mutual Information Kernels (includes relations among coefficients) [Laparra10] + - Hard Thresholding [Donoho95] + - Soft Thresholding [Donoho95] + - Bayesian approach assuming Gaussian marginal PDFs [Figueiredo01] + - Regularization in local frequency domains + - Adaptive regularization functional based on perceptual divisive normalization (includes relations among coefficients) [Gutiérrez06] + - L2 regularization functional [Tychonov77]. + - CSF-based regularization functional [Andrews77]. + - Adaptive Auto-Regressive regularization functionals [Banham97]. + + + +referencias: + - nombre: "Non-linear image representation for efficient perceptual coding" + autores: "J. Malo, I. Epifanio, R. Navarro, E. Simoncelli" + publicacion: "IEEE Trans. Im. Proc., 15(1):68-80, 2006" + + - nombre: "Perceptual adaptive insensitivity for support vector machine image coding" + autores: "G. Gómez, G. Camps-Valls, J. Gutiérrez, J. Malo" + publicacion: "IEEE Transactions on Neural Networks, 16(6):1574-1581, 2005" + + - nombre: "On the suitable domain for SVM training in image coding" + autores: "G. Camps-Valls, J. Gutiérrez, G. Gómez, J. Malo" + publicacion: "Journal of Machine Learning Research, 9:49-66, 2008" + + - nombre: "A Color Contrast Definition for Perceptually based Color Image Coding" + autores: "J. Gutiérrez, M.J. Luque, G. Camps-Valls, J. Malo" + publicacion: "Recent patents on Signal Processing. 2(1):33-55, 2012" + + - nombre: "Regularization operators for natural images based on nonlinear perception models" + autores: "J. Gutiérrez, F. Ferri, J. Malo" + publicacion: "IEEE Tr. Im. Proc., 15(1):189-200, 2006" + + - nombre: "Image denoising with kernels based on natural image relations" + autores: "V. Laparra, J. Gutiérrez, G. Camps-Valls, J. Malo" + publicacion: "Journal of Machine Learning Research, 11:873-903, 2010" + +enlaces: + - nombre: "Full Matlab Package" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/kecode/KeCoDe.zip" + descripcion: "Download the complete MATLAB toolbox for image coding and restoration." + +imagenes: + - ruta: "jpeg91_055.webp" + titulo: "[JPEG-like Coding](#coding-results)" + descripcion: "JPEG-like coding [Wallace91] with CSF and uniform quantizer." + + - ruta: "malo99_055.webp" + titulo: "[Simplified Masking](#coding-results)" + descripcion: "Image coded using simplified masking methods [Malo95, Malo99, Malo00]." + + - ruta: "malo06_055.webp" + titulo: "[General Masking](#coding-results)" + descripcion: "Image coded with general masking methods [Malo06]." + + - ruta: "robinson03_055.webp" + titulo: "[SVM with Simplified CSF](#coding-results)" + descripcion: "Support Vector Machine (SVM) coding with simplified CSF [Robinson03]." + + - ruta: "gomez05_055.webp" + titulo: "[SVM with Accurate CSF](#coding-results)" + descripcion: "Support Vector Machine (SVM) coding with accurate CSF [Gómez05]." + + - ruta: "camps08_055.webp" + titulo: "[SVM with General Masking](#coding-results)" + descripcion: "SVM with general masking in a divisive normalized domain [Camps08]." + + - ruta: "house_24.webp" + titulo: "[Original Color Image](#coding-results)" + descripcion: "Original color image with 24 bits per pixel." + + - ruta: "jpeg91_c_1.webp" + titulo: "[JPEG Coding of Color Image](#coding-results)" + descripcion: "JPEG-coded color image at 0.95 bits/pixel."
+ + - ruta: "simon08_c_1.webp" + titulo: "[General Masking with SVM (Color)](#coding-results)" + descripcion: "General masking combined with SVM for color image coding [Gutiérrez12]." + + - ruta: "distort_denois_im.webp" + titulo: "[Gaussian Noise Reduction](#restoration-results)" + descripcion: "Restored image after removing Gaussian noise with PSNR=25, SSIM=0.83." + + - ruta: "distort_deblur_im.webp" + titulo: "[Blur and Gaussian Noise Reduction](#restoration-results)" + descripcion: "Restored image after reducing blur and Gaussian noise with PSNR=24.6, SSIM=0.61." + + - ruta: "distort_JPEG_im.webp" + titulo: "[JPEG Noise Reduction](#restoration-results)" + descripcion: "Restored image after reducing JPEG compression noise with PSNR=25, SSIM=0.72." + + - ruta: "distort_salt_im.webp" + titulo: "[Salt and Pepper Noise Reduction](#restoration-results)" + descripcion: "Restored image after removing salt-and-pepper noise with PSNR=25.3, SSIM=0.83." + + - ruta: "restore_denois_im.webp" + titulo: "[Regularization Denoising](#restoration-results)" + descripcion: "Restored image using regularization methods for denoising." + + - ruta: "restor_deblur_im.webp" + titulo: "[Regularization Deblurring](#restoration-results)" + descripcion: "Restored image using regularization methods for deblurring and denoising." + + - ruta: "restorat_JPEG_im.webp" + titulo: "[Regularization JPEG Noise Reduction](#restoration-results)" + descripcion: "Restored image using regularization methods for removing JPEG noise." + + - ruta: "restorat_salt_im.webp" + titulo: "[Regularization Salt-and-Pepper Reduction](#restoration-results)" + descripcion: "Restored image using regularization methods for removing salt-and-pepper noise." + + - ruta: "results_denoise_wav_400_im.webp" + titulo: "[Wavelet Denoising](#restoration-results)" + descripcion: "Denoised image using wavelet and kernel-based methods." + + - ruta: "results_denoise_jpeg_wav_im.webp" + titulo: "[JPEG Noise Removal Using Wavelet](#restoration-results)" + descripcion: "Restored image using wavelet-based methods to remove JPEG compression noise." +--- \ No newline at end of file diff --git a/content/code/image_video_processing/vistaqualitytools/_index.md b/content/code/image_video_processing/vistaqualitytools/_index.md index 17303dbd..f38324c3 100644 --- a/content/code/image_video_processing/vistaqualitytools/_index.md +++ b/content/code/image_video_processing/vistaqualitytools/_index.md @@ -2,7 +2,7 @@ title: "VistaQualityTools" img: "barbara_a_medias.webp" image_alt: "VistaQualityTools Image" -link: "./subpages/vista_toolbox.html" +link: "./vistaqualitytools/content" description: | VistaQualityTools is a Matlab Toolbox for full reference color (and also achromatic) image quality assessment based on divisive normalization Human Vision models in the DCT and the Wavelet domains. references: diff --git a/content/code/image_video_processing/vistaqualitytools/content.md b/content/code/image_video_processing/vistaqualitytools/content.md new file mode 100644 index 00000000..df8c27d8 --- /dev/null +++ b/content/code/image_video_processing/vistaqualitytools/content.md @@ -0,0 +1,89 @@ +--- +title: "VistaQualityTools: The Image and Video Quality Toolbox based on Vision Models" +abstract: | + **Contributors:** + - Image: J. Malo, V. Laparra, J. Muñoz, I. Epifanio, A.M. Pons, M. Martinez and E. Simoncelli + - Video: J. Malo, J. Gutiérrez and A.B.
Watson + + The image distortion measures in VistaQualityTools are based on distances between the original and the distorted scenes in the visual response domain. Therefore, they rely on the cortical descriptions in [**VistaModels**](./../../../vision_and_color/colorlab/vistamodels), including metrics based on (a) normalized DCTs, (b) normalized orthonormal wavelets, and (c) multi-layer models with normalized overcomplete wavelets. All these measures substantially outperform the widely acclaimed SSIM. + + Our video quality measure, developed at the NASA Ames Research Center, is based on the same visual response principle. It achieved the 2nd best performance in the VQEG evaluation phase II. + + + # Table of Contents + + - [**(A) Image Quality Metrics**](#a-image-quality-metrics) + - [**Models and Distortion measures**](#models-and-distortion-measures) + - [**(B) Video Quality Measure**](#b-video-quality-measure) + + + # (A) Image Quality Metrics + + The distortion metrics in VistaQualityTools rely on the three cortical models we have developed over the years: (a) DCT transform and Divisive Normalization [IVC 1997, Displays 00, Patt.Rec.03, IEEE Trans.Im.Proc. 06], (b) Orthonormal Wavelets and Divisive Normalization [JOSA A 10, Neur.Comp. 10], and (c) Cascades of linear transforms and nonlinear saturations [PLoS 18, Front. Neurosci. 18]. + + ## Performance in subjectively-rated databases + + ## Saturation of the distortion and perceptual differences in constant-MSE series + + # Models and Distortion measures + + ## 1995 - 2008: Metric based on linear opponent channels, local-DCT and Div. Norm. + + This metric is based on an invertible representation originally tuned to reproduce contrast response curves [Pons PhD Thesis, 1997]. It was applied to reproduce subjective distortion opinion [Im.Vis.Comp.97, Displays99] and to improve the perceptual quality of JPEG and MPEG through (a) transform coding of the achromatic channel [Electr.Lett95, Electr.Lett99, Im.Vis.Comp.00, Patt.Recog.03, IEEE TNN 05, IEEE TIP 06a, IEEE TIP 06b, JMLR08], (b) the color channels [RPSP12], and (c) by improving the motion estimation [LNCS97, Electr.Lett98, Electr.Lett00a, Electr.Lett00b, IEEE TIP 01]. + + - **Download the Toolbox!:** [V1_model_DCT_DN_color.zip (74MB)](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/vista_toolbox/V1_model_DCT_DN_color.zip) + + ## 2009 - 2010: Metric based on linear opponent channels, Orthogonal Wavelets and Div. Norm. + + In this metric the parameters of the divisive normalization (linear weights, interaction kernel, semisaturation, excitation and summation exponents) were fitted to reproduce subjective image distortion opinion [JOSA A 10] following an exhaustive grid search as in [IEEE ICIP 02]. This model (which relies on the orthogonal wavelets of the MatlabPyrTools) was found to have excellent redundancy reduction properties [LNCS10, Neur.Comp.10]. + + - **Download the Toolbox!:** [V1_model_wavelet_DN_color.zip (14MB)](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/vista_toolbox/V1_model_wavelet_DN_color.zip) + + ## 2013 - 2018: Metric based on a Multi-Layer network with nonlinear opponent channels, Overcomplete Wavelets and Div. Norm.
+ This metric is based on a multi-layer model (or biologically-plausible deep network) that performs a chain of perceptually meaningful operations: nonlinear opponent chromatic channels, contrast computation, frequency selectivity and energy masking, and wavelet analysis + cross-subband masking [PLoS 18]. + + The parameters of the different layers were fitted in different ways: while the 2nd and 3rd layers (contrast and CSF+masking) were determined using Maximum Differentiation [Malo and Simoncelli SPIE.13], the 1st and 4th layers (chromatic front-end and wavelet layer) were fitted to reproduce subjective image distortion data [PLoS 18, Front. Neurosci. 18a, Front. Neurosci. 18b]. + + - **Download the Toolbox!:** [BioMultiLayer_L_NL_color.zip (49MB)](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/vista_toolbox/BioMultiLayer_L_NL_color.zip) + + # (B) Video Quality Measure + + Our distortion metric is based on weighting the contrast difference between the original and the distorted sequences. This adaptive weighting boosts perceptually visible features and attenuates negligible features. Once the background has been taken into account to consider masking, we compute the energy of the weighted difference using non-quadratic exponents. The parameters of these elements (widths of the filters and the masking kernels, summation exponents) were fitted to maximize the correlation with the subjective opinion. Then, we played with different versions of the model by considering subsets of the elements. We found that masking is as important as the CSF in reproducing the opinion of the observers [IEEE ICIP 02]. + + - **Download the Toolbox!:** [video_metric_sso.zip (34kB)](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/vista_toolbox/video_metric_sso.zip) + + Performance of the different versions of the vision model as a function of its elements in terms of regression error. **SSO** stands for the CSF of the Standard Spatial Observer, **m** stands for masking, **t** stands for temporal filtering, **p** stands for post-summation temporal filtering, and **h** stands for field doubling compensation. + + +imagenes: + - ruta: "BasesDist.webp" + titulo: "The problem" + descripcion: "Given an original image (top), distortions of different kinds appear to have different perceptual effects (bottom). The challenge is computing a descriptor of distortion which is correlated with the opinion of observers collected in [subjectively rated databases](http://www.ponomarenko.info/tid2013.htm). The complexity of human vision implies that the Euclidean distance (or Mean Squared Error) is not a good proxy for subjective distortion. Nevertheless, the image quality problem goes beyond fitting any flexible model to maximize the correlation with subjective opinion (see [[Front. Neurosci. 2018](https://arxiv.org/abs/1801.09632)])." + - ruta: "metricas2.webp" + titulo: "Our solution" + descripcion: "The scatter plots show the performance of two perceptual metrics in reproducing subjective opinion. On the one hand (in red), the widely acclaimed Structural SIMilarity index (SSIM) that received the [EMMY Award of the American TV Industry in 2015](https://youtu.be/e5-LCFGdgMA), and, on the other hand (in blue), our metric based on a cascade of L+NL layers [[PLoS 2018](https://arxiv.org/abs/1711.00526)]."
+ - ruta: "metricas3.webp" + - ruta: "ieee02.webp" + +referencias: + - nombre: "Perceptually weighted optical flow for motion-based segmentation in MPEG-4 paradigm" + autores: "Malo, Gutiérrez, Epifanio, Ferri." + publicacion: "Electr. Lett. 36 (20):1693-1694 (2000)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/ELECT98.PS.gz" + - nombre: "Visual aftereffects and sensory nonlinearities from a single statistical framework" + autores: "V. Laparra & J. Malo." + publicacion: "Frontiers in Human Neuroscience 9:557 (2015)" + url: "https://www.frontiersin.org/articles/10.3389/fnhum.2015.00557/full" + +enlaces: + - nombre: "Updated Matlab Toolbox (VISTALAB 4.0)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/vista_toolbox/Vistalab.zip" + - nombre: "Outdated toolbox (VISTALAB 1.0)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/vista_toolbox/BasicVideoTools_code.zip" + - nombre: "Extensions of VISTALAB I: VistaVideoCoding" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/vista_toolbox/VistaVideoCoding.zip" + - nombre: "Extensions of VISTALAB II: VistaModels" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/vista_toolbox/BioMultiLayer_L_NL_color.zip" + - nombre: "Extensions of VISTALAB III: COLORLAB" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_imvideo/vista_toolbox/Colorlab.zip" +--- diff --git a/content/code/vision_and_color/colorlab/content.md b/content/code/vision_and_color/colorlab/content.md index 5b3b900a..77f56efb 100644 --- a/content/code/vision_and_color/colorlab/content.md +++ b/content/code/vision_and_color/colorlab/content.md @@ -1,7 +1,6 @@ --- title: "ColorLab: The Matlab Toolbox for Colorimetry and Color Vision" abstract: | - # The Matlab toolbox for Colorimetry and Color Vision **ColorLab** is a color computation and visualization toolbox to be used in the MATLAB environment. **ColorLab** is intended to deal with color in general-purpose quantitative colorimetric applications as color image processing and psychophysical experimentation. **ColorLab** uses colorimetrically meaningful representations of color and color images (tristimulus values, chromatic coordinates and luminance, or, dominant wavelength, purity and luminance), in any primaries system of the tristimulus colorimetry (including CIE standards as CIE XYZ or CIE RGB). **ColorLab** relates this variety of colorimetric representations to the usual device-dependent discrete-color representation, i.e. it solves the problem of displaying a colorimetrically specified scene in the monitor within the accuracy of the VGA. diff --git a/content/code/vision_and_color/colorlab/flow_wilson.md b/content/code/vision_and_color/colorlab/flow_wilson.md new file mode 100644 index 00000000..7b23baca --- /dev/null +++ b/content/code/vision_and_color/colorlab/flow_wilson.md @@ -0,0 +1,33 @@ +--- +title: "Visual Information Flow in Wilson Cowan Networks. Gómez-Villa et al. Journal of Neurophysiology 2019." +abstract: | + The Wilson-Cowan interaction of wavelet-like visual neurons is analyzed in total correlation terms for the first time. Theoretical and empirical results show that a psychophysically-tuned interaction achieves the biggest efficiency in the most frequent region of the image space. 
This is an original confirmation of the Efficient Coding Hypothesis and suggests that neural field models can be an alternative to Divisive Normalization in image compression. + + + +referencias: + - nombre: "Visual information Flow in Wilson Cowan Networks" + autores: "A. Gómez-Villa, M. Bertalmío and J. Malo" + publicacion: " J. Neurophysiol. 123 (6): 2249-2268 (2020) https://doi.org/10.1152/jn.00487.2019" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/infoWC_JNP19.pdf" + + - nombre: "Spatio-Chromatic Information available from different Neural Layers via Gaussianization" + autores: "J. Malo" + publicacion: "[J. Mathematical Neuroscience (2020) https://doi.org/10.1186/s13408-020-00095-8](https://rdcu.be/caFYZ)" + url: "https://arxiv.org/abs/1910.01559" + + - nombre: "Information Flow in Color Appearance Neural Networks" + autores: "J. Malo" + publicacion: "arXiv: Quantitative Biology, Neurons and Cognition https://arxiv.org/abs/1912.12093 (2019)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Entropy_conf_2020.pdf" + + - nombre: "Visual information Flow in Psychophysical-Physiological networks" + autores: "J. Malo and Q. Li" + publicacion: "Notebook (as of July 2021) +Evolving [Google Notebook](https://docs.google.com/document/d/14LvHeix6zE92e-T4w7e9ZmBqS6uVd4uOJ22N6-NFCc0/edit)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/NOTES_info_flow_psycho_models.pdf" + +enlaces: + - nombre: "Data and Code infoDivisiveNormalization.zip (17GB)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/infoDivisiveNormalization.zip" +--- diff --git a/content/code/vision_and_color/colorlab/spatio_chromatic.md b/content/code/vision_and_color/colorlab/spatio_chromatic.md new file mode 100644 index 00000000..77f56efb --- /dev/null +++ b/content/code/vision_and_color/colorlab/spatio_chromatic.md @@ -0,0 +1,134 @@ +--- +title: "ColorLab: The Matlab Toolbox for Colorimetry and Color Vision" +abstract: | + **ColorLab** is a color computation and visualization toolbox to be used in the MATLAB environment. **ColorLab** is intended to deal with color in general-purpose quantitative colorimetric applications such as color image processing and psychophysical experimentation. + + **ColorLab** uses colorimetrically meaningful representations of color and color images (tristimulus values; chromatic coordinates and luminance; or dominant wavelength, purity and luminance), in any primaries system of tristimulus colorimetry (including CIE standards such as CIE XYZ or CIE RGB). **ColorLab** relates this variety of colorimetric representations to the usual device-dependent discrete-color representation, i.e. it solves the problem of displaying a colorimetrically specified scene on the monitor within the accuracy of the VGA. + + A number of other interesting color representations are also provided, such as CIE uniform color spaces (CIE Lab and CIE Luv), opponent color representations based on advanced color vision models, and color appearance representations (RLab, LLab, SVF and CIECAMs). All these representations are invertible, so the result of image processing made in these colorimetrically meaningful representations can always be inverted back to the tristimulus representation at hand, and be displayed.
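As a rough illustration of this kind of round trip, the sketch below goes from device values to tristimulus values and back in plain Matlab. The 3x3 matrix M is an assumed example (approximately the CIE RGB to XYZ change of basis), not one of ColorLab's calibrated matrices or higher-level functions:

```matlab
% Schematic round trip: device RGB -> tristimulus XYZ -> chromatic coordinates -> back.
M   = [0.490 0.310 0.200; 0.177 0.812 0.011; 0.000 0.010 0.990];  % assumed primaries matrix
rgb = reshape(double(img), [], 3)';        % 3 x Npix matrix of device values
XYZ = M * rgb;                             % tristimulus values
xy  = XYZ(1:2, :) ./ sum(XYZ, 1);          % chromatic coordinates (x, y)
Y   = XYZ(2, :);                           % luminance channel
rgb_back = M \ XYZ;                        % invert back to device values for display
```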
**ColorLab** includes useful visualization routines to represent colors in the tristimulus space or in the chromatic diagram of any color basis, as well as an advanced vector quantization scheme for color palette design. An extensive color database is also included, with the CIE 1931 color matching functions, reflectance data of 1250 chips from the Munsell Book of Color, McAdam ellipses, normalized spectra of a number of standard CIE illuminants, matrices to change to a number of tristimulus representations, and calibration data of an ordinary CRT monitor. + + The standard tools in ColorLab (and in [**VistaLab**](./../vistalab)) are the necessary building blocks to develop more sophisticated vision models included in the dedicated site [**VistaModels**](./../vistamodels). + + # Table of Contents + - [Colorfulness edition using the purity](#colorfulness-edition-using-the-purity) + - [Hue-based segmentation and edition using the dominant wavelength](#hue-based-segmentation-and-edition-using-the-dominant-wavelength) + - [Luminance edition in cd/m2](#luminance-edition-in-cd/m2) + - [Changing the spectral illumination (standard and user-defined illuminants)](#changing-the-spectral-illumination-standard-and-user-defined-illuminants) + - [Playing with McAdam ellipses and Munsell chips](#playing-with-mcadam-ellipses-and-munsell-chips) + - [Chromatic induction in LLab](#chromatic-induction-in-llab) + + + + # Colorfulness edition using the purity + + Colorimetric Purity and Excitation Purity are the descriptors of colorfulness in Tristimulus Colorimetry. Both of them are available in ColorLab. In the example below we analyze the colors of an image in the CIE XYZ system and reduce the excitation purity by a constant factor, leaving the luminance and the dominant wavelength unaltered, in order to obtain an image with reduced colorfulness. Other possibilities to obtain this effect with ColorLab include using any other tristimulus representation or changing the colorfulness descriptors in a number of available non-linear color appearance models. + + # Hue-based segmentation and edition using the dominant wavelength + + The Dominant Wavelength is the descriptor of hue in Tristimulus Colorimetry. In the example below we first segment the flowers by selecting a range of wavelengths (in the CIE XYZ chromatic diagram) and then we modify their hue by applying a rotation to the chromatic coordinates. Other possibilities to obtain this effect with ColorLab include using any other tristimulus representation or changing (rotating) the hue descriptor in a number of available non-linear color appearance models. + + # Luminance edition in cd/m2 + + Luminance is the descriptor of brightness in Tristimulus Colorimetry. In the example below we reduce the luminance by reducing the length of the tristimulus vectors by a constant factor in an arbitrary (RGB) tristimulus space (note how the chromatic diagram is twisted). Of course the chromatic coordinates remain the same (as can be seen in the figures below). Other possibilities to obtain this effect with ColorLab include using any other tristimulus representation or changing the brightness descriptor in a number of available non-linear color appearance models. + + # Changing the spectral illumination (standard and user-defined illuminants) + + ColorLab is able to deal with the spectro-radiometric description of color images or estimate it from their (usual) colorimetric description by using the Munsell reflectances data set.
In this way, the effect of changing the spectral radiance of the illuminant may be simulated by obtaining the new tristimulus values with the new illuminant. In the example below, each pixel of the original image is assumed to be a patch with a given (or estimated) reflectance under white light illumination. The user may define a different illuminant (in this case a purple radiation) and apply it to the reflectances, thus obtaining the new image and the new (tristimulus) colors. Of course, this can be done in any tristimulus representation. But, better than that, if non-linear color appearance models are used together with the corresponding pair procedure [JOSA A 04], color constancy may be predicted! + + # Playing with McAdam ellipses and Munsell chips + + Now you can easily check the non-uniformity of the tristimulus space on your computer screen! As ColorLab comes with the McAdam ellipses database and the Munsell chips database, its color reproduction ability allows you to generate the right colors to prove that your discrimination is not Euclidean. + In the first example below, we distort two given colors (green and blue) by a constant factor in the chromatic diagram in the principal directions of the ellipsoids. Despite the possible inaccuracies introduced by the use of a generic calibration, it is clear that the blues are more different from each other (the ellipse is smaller!) and the distortion in every case is more noticeable when it is done in the short direction of the ellipse. + The second example shows a set of Munsell chips of different chroma which are chosen to differ from each other by a constant number of JNDs. + + # Chromatic induction in LLab + The perception of a test is modified by the stimuli in the surround. This is referred to as chromatic induction. In the example below, the (physically constant) gray test in the center changes its hue to bluish as the surround gets more yellow. Non-linear color appearance models are required to understand this effect. + + ## Key Capabilities + + - **Visualization**: Visualize color in tristimulus spaces or chromatic diagrams. + - **Transformation**: Move between tristimulus and non-linear color models like CIECAM. + - **Quantitative Processing**: Apply functions for color purity, luminance, and hue manipulation. + - **Extensive Color Database**: Includes CIE color matching functions, Munsell chips, McAdam ellipses, and more. + + ## Download ColorLab + - **Toolbox**: [Colorlab.zip (15MB)](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Colorlab.zip) + - **User Guide**: [ColorLab_userguide.pdf (12MB)](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/COLORLAB_userguide.pdf) + +imagenes: + - ruta: "colorlab1.webp" + titulo: "[Samples of the Munsell Book of Color illuminated](#the-matlab-toolbox-for-colorimetry-and-color-vision)" + descripcion: "Samples of the Munsell Book of Color illuminated using CIE standard illuminants D65 (top) and A (bottom). ColorLab comes with many spectral reflectances and spectral radiances of standard sources and objects. These can be used as input data to solve the corresponding pair problem [[Neur.Comp.12](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Neco_accepted_2012.pdf), [PLoS ONE 14](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Gutmann_PLOS_ONE_2014.pdf)]."
+ - ruta: "colorlab2.webp" + titulo: "[Pairs predicted with standard CIELab and CIECAM](#the-matlab-toolbox-for-colorimetry-and-color-vision)" + descripcion: "Corresponding pairs predicted with standard CIELab and CIECAM (implemented in Colorlab, left) are compared with our statistically-based algorithms: the nonlinear Sequential Principal Curves Analysis (top-right) [Neur.Comp.12](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Neco_accepted_2012.pdf), and the linear Higher Order Canonical Correlation Analysis (bottom-right) [PLoS ONE 14](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Gutmann_PLOS_ONE_2014.pdf)." + - ruta: "colorfulness.webp" + titulo: "[Desaturating clock](#colorfulness-edition-using-the-purity)" + descripcion: "Colorimetric Purity and Excitation Purity are the descriptors of colorfulness in Tristimulus Colorimetry. Both of them are available in ColorLab. In the example below we analyze the colors of an image in the CIE XYZ system and reduce the excitation purity by a constant factor, leaving the luminance and the dominant wavelength unaltered, in order to obtain an image with reduced colorfulness. Other possibilities to obtain this effect with ColorLab include using any other tristimulus representation or changing the colorfulness descriptors in a number of available non-linear color appearance models." + - ruta: "hue1.webp" + titulo: "[Artificial Flowers (Original)](#hue-based-segmentation-and-edition-using-the-dominant-wavelength)" + descripcion: "Red flowers are segmented by selecting the colors in a certain range of dominant wavelengths." + - ruta: "hue2.webp" + titulo: "[Artificial Flowers (Modified)](#hue-based-segmentation-and-edition-using-the-dominant-wavelength)" + descripcion: "Rotation of the corresponding chromatic coordinates leads to a series of artificial flowers." + - ruta: "luminance.webp" + titulo: "[Marilyn in dim light](#luminance-edition-in-cd/m2)" + descripcion: "The reduction in the length of the tristimulus does not change the intersection with the chromatic diagram." + - ruta: "irradiance.webp" + titulo: "[The pink room key](#changing-the-spectral-illumination-standard-and-user-defined-illuminants)" + descripcion: "Digital images can be turned into spectral arrays and these can be illuminated with customized light." + - ruta: "mcadam.webp" + titulo: "[Color discrimination (McAdam ellipses, top) and Uniformly distributed colors (Munsell chips, bottom)](#playing-with-mcadam-ellipses-and-munsell-chips)" + descripcion: "Color discrimination (McAdam ellipses, top): Finer discrimination in the blue-purple region than in the green region. Anisotropic JNDs in color are an example of the Maximum Differentiation (MAD) concept [Malo & Simoncelli SPIE 15](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/malo15a-reprint.pdf). Uniformly distributed colors (Munsell chips, bottom): Constant perceptual differences in Munsell chips imply they distribute in ellipsoids around the white point similarly to the corresponding McAdam ellipse." + - ruta: "color_junto.webp" + titulo: "[Prediction of induced color with LLab](#chromatic-induction-in-llab)" + descripcion: "The LLab non-linear color representation was used to compute the corresponding colors of the central test in a gray surround. The results are shown in the CIE xy diagram. Note that as the surround increases the colorfulness, an opposite reaction is induced in the test.
This numerical result was used to generate a set of different stimuli in a constant gray background giving rise to the same perception as the central test on a changing background (see below)." + +referencias: + - nombre: "ColorLab: the Matlab toolbox for Colorimetry and Color Vision. Univ. Valencia 2002" + autores: "J. Malo & M.J. Luque. " + url: "#" + - nombre: "Corresponding-pair procedure: a new approach to simulation of dichromatic color perception" + autores: "P. Capilla, M. Diez, M.J. Luque, & J. Malo" + publicacion: "JOSA A 21(2): 176-186 (2004)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/josa_04.pdf" + - nombre: "Nonlinearities and Adaptation of Color Vision from Sequential Principal Curves Analysis" + autores: "V. Laparra, S. Jimenez, G. Camps & J. Malo" + publicacion: "Neural Computation 24(10): 2751-2788 (2012)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Neco_accepted_2012.pdf" + - nombre: "Spatio-Chromatic Adaptation via Higher-Order Canonical Correlation Analysis of Natural Images" + autores: "M. Gutmann, V. Laparra, A. Hyvarinen & J. Malo" + publicacion: "PLoS ONE 9(2): e86481 (2014)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Gutmann_PLOS_ONE_2014.pdf" + - nombre: "Visual aftereffects and sensory nonlinearities from a single statistical framework" + autores: "V. Laparra & J. Malo" + publicacion: "Frontiers in Human Neuroscience 9:557 (2015)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/LaparraMalo15.pdf" + - nombre: "Effect of a Yellow Filter on Brightness Evaluated by Asymmetric Matching: Measurements and Predictions" + autores: "M.J. Luque, et al." + publicacion: "J. Opt. A - Pure Appl. Opt. (Inst. of Physics), 8 (5): 398-408 (2006)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Luque06.pdf" + - nombre: "Analyzing the metrics of the perceptual space in a new multistage physiological colour vision model" + autores: "E. Chorro, F.M. Martínez‐Verdú, D. de Fez, P. Capilla, & M.J. Luque" + publicacion: "Color Res. Appl., 34: 359-366 (2009)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Chorro09.pdf" + - nombre: "Images Perceived after Chromatic or Achromatic Contrast Sensitivity Losses" + autores: "M.J. Luque, et al." + publicacion: "Optom. Vision Sci., 87 (5): 313-322 (2010)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Luque10.pdf" + - nombre: "Simulating Images Seen by Patients with Inhomogeneous Sensitivity Losses" + autores: "P. Capilla, M.J. Luque, M. Diez" + publicacion: "Optom. Vision Sci., 89 (10): 1543-1556 (2012)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Capilla12.pdf" + - nombre: "Software for simulating dichromatic perception of video streams" + autores: "M.J. Luque, D. de Fez, & P. Acevedo" + publicacion: "Color Res. 
Appl., 39: 486-491 (2014)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Luque14.pdf" + +enlaces: + - nombre: "Matlab Toolbox (version 4.0)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Colorlab.zip" + - nombre: "Colorlab User Guide" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/COLORLAB_userguide.pdf" +--- + diff --git a/content/code/vision_and_color/colorlab/vistalab.md b/content/code/vision_and_color/colorlab/vistalab.md index b1b12f69..fd819c5b 100644 --- a/content/code/vision_and_color/colorlab/vistalab.md +++ b/content/code/vision_and_color/colorlab/vistalab.md @@ -42,7 +42,14 @@ abstract: | # Controlled spatio-temporal stimuli - The movies below illustrate the abilities of VistaLab for accurate motion control. First row: includes sequences of the motion of a lambertian rigid body evolving in a gravitatory field with inelastic restrictions recorded from different points of view, this example allows arbitrary locations of the illumination and camera. In this case the actual motion in 3D world and the optical flow (motion in the retinal plane) are known. Second row: includes an example of random dots moving according to arbitrary optical flow fields. Third row: shows how static pictures can be animated using spatially uniform flows of arbitrary speed leading to interesting shape-from-motion effects in the case of noise patterns. Fourth row: shows different movies of the same periodic pattern moving at progressively increasing speeds. Aliasing introduces speed reversal at the expected place, as demonstrated by the Fourier diagrams below. + The movies below illustrate the abilities of VistaLab for accurate motion control. + - **First row:** includes sequences of the motion of a Lambertian rigid body evolving in a gravitational field with inelastic restrictions, recorded from different points of view; this example allows arbitrary locations of the illumination and camera. In this case the actual motion in the 3D world and the optical flow (motion in the retinal plane) are known. + + - **Second row:** includes an example of random dots moving according to arbitrary optical flow fields. + + - **Third row:** shows how static pictures can be animated using spatially uniform flows of arbitrary speed, leading to interesting shape-from-motion effects in the case of noise patterns. + + - **Fourth row:** shows different movies of the same periodic pattern moving at progressively increasing speeds. Aliasing introduces speed reversal at the expected place, as demonstrated by the Fourier diagrams below (a minimal numerical sketch of this aliasing effect is given after the Key Capabilities list). # Extensions of VistaLab VistaLab only addresses the linear part of the neural mechanisms that mediate the preattentive perception of spatio-temporal patterns. However, it doesnt combine these mechanisms to compute motion (optical flow), it doesnt include the nonlinear interactions between the linear mechanisms, and it doesnt include color. @@ -55,9 +62,6 @@ abstract: | - **Video Synthesis**: Create controlled video sequences with specific spatio-temporal properties. - **Fourier Domain Tools**: Visualize spatio-temporal frequency response of neural models.
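As a rough illustration of this kind of controlled stimulus and of the Fourier-domain visualization mentioned above, the sketch below uses only core Matlab (it does not call any VistaLab routine, and the frame size, frequency, and speed values are arbitrary) to build a grating drifting at a controlled speed and to display its spatio-temporal spectrum.

```matlab
% Minimal sketch with core Matlab only (no VistaLab calls, illustrative values):
% a grating drifting at a controlled speed and its spatio-temporal spectrum.
% The temporal frequency is fx*v cycles/frame: when it exceeds 0.5 cycles/frame
% the sampled sequence is aliased and the apparent speed reverses.
N  = 64;  T = 32;                      % frame size (pixels) and number of frames
fx = 4;                                % spatial frequency (cycles per image width)
v  = 0.05;                             % speed (image widths per frame); try 0.15 to see aliasing
xx = repmat((0:N-1)/N, N, 1);          % horizontal coordinate of each pixel
mov = zeros(N, N, T);
for k = 1:T
    mov(:,:,k) = cos(2*pi*fx*(xx - v*(k-1)));   % frame k of the drifting grating
end
F = fftshift(abs(fftn(mov)));          % 3D spectrum: energy on a tilted line/plane
imagesc(squeeze(max(F, [], 1)));       % collapse one spatial axis: fx-ft diagram
axis xy; xlabel('temporal frequency (index)'); ylabel('spatial frequency (index)');
```

With the illustrative values above the grating moves at 0.2 cycles/frame; raising v so that fx*v exceeds 0.5 cycles/frame makes the spectral line wrap around the Nyquist limit, which is the aliasing-induced speed reversal shown in the fourth-row movies.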
- ## Download VistaLab - - **Toolbox**: [Vistalab.zip (30MB)](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Vistalab.zip) - - **User Guide**: [VistaLab_userguide.pdf (12MB)](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Vistalab_userguide.pdf) imagenes: - ruta: "noise.gif" @@ -163,8 +167,6 @@ referencias: enlaces: - nombre: "Matlab Toolbox (version 4.0)" url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Vistalab.zip" - - nombre: "VistaLab User Guide" - url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Vistalab_userguide.pdf" - nombre: "Extensions of VistaLab I: VistaVideoCoding" url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/VistaVideoCoding.zip" - nombre: "Extensions of VistaLab II: BioMultiLayer_L_NL_color" diff --git a/content/code/vision_and_color/colorlab/vistamodels.md b/content/code/vision_and_color/colorlab/vistamodels.md index 57e7afd5..1d158805 100644 --- a/content/code/vision_and_color/colorlab/vistamodels.md +++ b/content/code/vision_and_color/colorlab/vistamodels.md @@ -1,67 +1,129 @@ --- -title: "VistaLab: The Matlab Toolbox for Linear Spatio-Temporal Vision Models" +title: "VistaModels: Computational models of Visual Neuroscience" abstract: | - **VistaLab** is a Matlab toolbox that provides the linear building-blocks to create spatio-temporal vision models and the tools to control the spatio-temporal properties of video sequences. These building blocks include the spatio-temporal receptive fields of LGN, V1, and MT cells, and the spatial and spatio-temporal Contrast Sensitivity Functions (CSFs). Additionally, **VistaLab** allows accurate spatio-temporal sampling, spatio-temporal Fourier domain visualization, and generation of video sequences with controlled texture and speed. Tools for video sequence generation include noise, random dots, and rigid-body animations with Lambertian reflectance. + The Toolboxes in the VistaModels site are organized into three categories of different nature: [**(a) Empirical-mechanistic Models**](#a-empirical-mechanistic-models), tuned to reproduce basic phenomena of color and texture perception, [**(b) Principled Models**](#b-principled-models), derived from information theoretic arguments, and [**(c) Engineering-motivated Models**](#c-engineering-motivated-models), developed to address applied problems in image and video processing. + + The algorithms in **VistaModels** require the standard building blocks provided in the (more basic) toolboxes VistaLab and ColorLab. However, the necessary functions from these more basic toolboxes are included in the packages listed below for the user's convenience. - The perception and video synthesis tools enable accurate illustrations of the visibility of achromatic spatio-temporal patterns. Linear filters in **VistaLab** provide rough approximations of pattern visibility, which can be enhanced with non-linear models available in related toolboxes. + # Table of contents - The **standard tools in VistaLab** (and **ColorLab**) are essential for building more sophisticated vision models, available on the **VistaModels** dedicated site.
+ - [**(A) Empirical-mechanistic Models**](#a-empirical-mechanistic-models) + - [1995 - 2008: Linear opponent color channels, local-DCT and Divisive Normalization](#1995---2008-linear-opponent-color-channels-local-dct-and-divisive-normalization) + - [2009 - 2010: Linear opponent color channels, Orthogonal Wavelet and Divisive Normalization](#2009---2010-linear-opponent-color-channels-orthogonal-wavelet-and-divisive-normalization) + - [2013 - 2018: Multi-Layer network with nonlinear opponent color, Overcomplete Wavelet and Divisive Normalization](#2013---2018-multi-layer-network-with-nonlinear-opponent-color-overcomplete-wavelet-and-divisive-normalization) + - [2019 - 2021: Convolutional and differentiable implementations](#2019---2021-convolutional-and-differentiable-implementations) + - [Psychophysical test-bed for model tuning and comparison](#psychophysical-test-bed-for-model-tuning-and-comparison) + - [Model Comparison](#model-comparison) + - [**(B) Principled Models**](#b-principled-models) + - [Efficient coding in mechanistic models](#efficient-coding-in-mechanistic-models) + - [Statistically-based linear receptive fields](#statistically-based-linear-receptive-fields) + - [Statistically-based nonlinearities](#statistically-based-nonlinearities) + - [**(C) Engineering-motivated Models**](#c-engineering-motivated-models) + - [Perceptually-weighted motion estimation: VistaVideoCoding](#perceptually-weighted-motion-estimation-vistavideocoding) + - [Image Coding: VistaCoRe](#image-coding-vistacore) + - [Image and Video Quality: VistaQualityTools](#image-and-video-quality-vistaqualitytools) - # Table of Contents - - [Retina and LGN](#retina-and-lgn) - - [V1 Cortex](#v1-cortex) - - [MT Region](#mt-region) - - [Spatio-temporal Contrast Sensitivities](#spatio-temporal-contrast-sensitivities) - - [Controlled Spatio-temporal Stimuli](#controlled-spatio-temporal-stimuli) - - [Extensions of VistaLab](#extensions-of-vistalab) + # (A) Empirical-mechanistic Models - ## Retina and LGN - **VistaLab** provides implementations of LGN receptive fields with center-surround configurations, supporting various configurations like monophasic and biphasic temporal responses. These can generate artificial retinas and simulate neural responses to natural movies using Fourier domain convolution methods. + Cascades of linear transforms and nonlinear saturations are ubiquitous in neuroscience and artificial intelligence ever since the [[McCulloch-Pitts model](http://www.scholarpedia.org/article/Models_of_visual_cortex)]. More recently this has been exemplified in subtractive and divisive models of cortical interaction [Wilson & Cowan, Kybernetik 73; Carandini and Heeger, Nature Rev. Neurosci. 12]. + + Over the years, we have developed progressively better versions of such cascades to be applicable to color images and video sequences. These parametric models were empirically tuned to give a rough description of different color and texture perception phenomena (see the [psychophysical test-bed](#psychophysical-test-bed-for-model-tuning-and-comparison) below for model tuning and comparison). - ## V1 Cortex - V1 simple cells are modeled with Gabor-like receptive fields tuned to spatial and temporal frequencies. **VistaLab** enables the construction of artificial cortices and visualizes neural responses to natural movies. 
+ See a visual example of the effect of the local spatial-frequency transforms and the divisive normalization below (illustration of the 2018 model) - ## MT Region - **VistaLab** supports models for MT cells, which are narrow-band tuned for speed. It allows visualizing the optimal patterns for MT neurons and computing their responses to natural movies. + ## 1995 - 2008: Linear opponent color channels, local-DCT and Divisive Normalization - ## Spatio-temporal Contrast Sensitivities - **VistaLab** includes several CSFs: achromatic and chromatic (red-green, yellow-blue) spatial CSFs and spatio-temporal CSFs. These CSFs are useful for applying perceptual sensitivity in image and video processing. + This model is invertible and was originally tuned to reproduce contrast response curves obtained from contrast incremental thresholds [Pons PhD Thesis, 1997]. It was applied to reproduce subjective distortion opinion [[Im.Vis.Comp.97](https://www.sciencedirect.com/science/article/abs/pii/S0262885696000042), [Displays 99](https://www.sciencedirect.com/science/article/abs/pii/S0141938299000098)] and to improve the perceptual quality of JPEG and MPEG through (a) transform coding of the achromatic channel [[Elect.Lett.95](https://www.uv.es/vista/vistavalencia/papers/ELECT95.PS.gz), [Elect.Lett.99](https://www.uv.es/vista/vistavalencia/papers/ELECT99.PS.gz), [Im.Vis.Comp.00](https://www.uv.es/vista/vistavalencia/papers/ivc99.ps.gz), [IEEE TIP 01](https://www.uv.es/vista/vistavalencia/papers/ieeeoct01.pdf), [Patt.Recog.03](https://www.uv.es/vista/vistavalencia/papers/patt_rec03.pdf), [IEEE TNN 05](https://www.uv.es/vista/vistavalencia/papers/SVM_JND8_ACCEPTED.pdf), [IEEE TIP 06a](https://www.uv.es/vista/vistavalencia/papers/manuscript4.pdf), [JMLR08](https://www.uv.es/vista/vistavalencia/papers/camps_JMLR_08.pdf)], (b) the color channels [[RPSP12](https://www.eurekaselect.com/96168/article)], and (c) by improving the motion estimation [[LNCS97](https://www.uv.es/vista/vistavalencia/papers/LNCS97.PS.gz), [Elect.Lett.98](https://www.uv.es/vista/vistavalencia/papers/ELECT98.PS.gz), [Elect.Lett.00a](https://www.uv.es/vista/vistavalencia/papers/seg_ade2.ps), [Elect.Lett.00b](https://www.uv.es/vista/vistavalencia/papers/elect00.ps), [IEEE TIP 01](https://www.uv.es/vista/vistavalencia/papers/ieeeoct01.pdf)]. - ## Controlled Spatio-temporal Stimuli - **VistaLab** can generate controlled stimuli, such as rigid-body motion, random dots, and periodic pattern motion, demonstrating visual effects like speed reversal due to aliasing. + - **Download the Toolbox!:** [V1_model_DCT_DN_color.zip (74MB)](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/V1_model_DCT_DN_color.zip) + + ## 2009 - 2010: Linear opponent color channels, Orthogonal Wavelet and Divisive Normalization - ## Extensions of VistaLab - Extensions include **VistaVideoCoding** for perceptually weighted optical flow, **BioMultiLayer_L_NL_color** for nonlinear neural interactions, and **ColorLab** for color processing. + Even though we developed our own Matlab code for some specific overcomplete wavelets in the mid 90's [[MSc Thesis 95](http://www.uv.es/vista/vistavalencia/papers/tesis/msc_jmalo.zip), [J.Mod.Opt.97](https://www.uv.es/vista/vistavalencia/papers/JMO97.PS.gz)], it took some time until we applied the Divisive Normalization interaction to Simoncelli's wavelets in MatlabPyrTools (which are substantially more efficient). 
The model was fitted to reproduce subjective image distortion opinion [[JOSA A 10](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Laparra_JOSA_10.pdf)] following exhaustive grid search as in [[IEEE ICIP 02](https://www.uv.es/vista/vistavalencia/papers/icip02.pdf)]. This model (which relies on the orthogonal wavelets of the MatlabPyrTools) was found to have excellent redundancy reduction properties [[LNCS10](https://link.springer.com/chapter/10.1007/978-3-642-11509-7_3), [Neur.Comp.10](https://www.uv.es/vista/vistavalencia/papers/Malo_Laparra_Neural_10b.pdf)]. - ## Key Capabilities - - **Spatio-temporal Modeling**: Build models for LGN, V1, and MT neural responses. - - **Contrast Sensitivity**: Apply achromatic and chromatic CSFs to video and images. - - **Video Synthesis**: Create controlled video sequences with specific spatio-temporal properties. - - **Fourier Domain Tools**: Visualize spatio-temporal frequency response of neural models. + - **Download the Toolbox!:** [V1_model_wavelet_DN_color.zip (14MB)](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/V1_model_wavelet_DN_color.zip) - ## Download VistaLab - - **Toolbox**: [Vistalab.zip (30MB)](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Vistalab.zip) - - **User Guide**: [VistaLab_userguide.pdf (12MB)](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Vistalab_userguide.pdf) + ## 2013 - 2018: Multi-Layer network with nonlinear opponent color, Overcomplete Wavelet and Divisive Normalization -imagenes: - - ruta: "RF_LGN.gif" - titulo: "[Pulsating on-center LGN neuron](#retina-and-lgn)" - descripcion: "Receptive field of an on-center LGN neuron, showing excitation (white) and inhibition (black) in the spatiotemporal domain." - - ruta: "RF_V1.gif" - titulo: "[Pulsating V1 neurons](#v1-cortex)" - descripcion: "Receptive fields of V1 neurons tuned to specific spatio-temporal frequencies, visualized in the spatiotemporal domain." - - ruta: "response_MT.gif" - titulo: "[MT neural response to a natural stimulus](#mt-region)" - descripcion: "MT neurons' responses to a natural movie, with each set of neurons tuned to specific speeds and features." - - ruta: "csf_st.JPG" - titulo: "[Spatio-temporal CSF](#spatio-temporal-contrast-sensitivities)" - descripcion: "The spatio-temporal Contrast Sensitivity Function (CSF) with saccade compensation applied, showing different representations." - - ruta: "response_CSF.gif" - titulo: "[Natural movie filtered by spatio-temporal CSF](#spatio-temporal-contrast-sensitivities)" - descripcion: "A natural movie filtered using the spatio-temporal CSF." - - ruta: "aliasing.JPG" - titulo: "[Speed reversal due to aliasing](#controlled-spatio-temporal-stimuli)" - descripcion: "High-speed periodic patterns showing speed reversal due to aliasing, visualized in the Fourier domain." + Even though we developed a comprehensive color vision toolbox in the early 2000's (see [ColorLab](./../content) ), it took some time until we included a fully adaptive chromatic front-end before the spatial processing models based on overcomplete wavelets. Note that the older toolboxes rely on (too rough) linear RGB to YUV transforms. This multi-layer model (or biologically-plausible deep network) performs the following chain of perceptually meaningful operations [[PLoS 18](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0201326)]. 
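To make the structure of these cascades concrete, here is a minimal Matlab sketch (assuming the Image Processing Toolbox) of one generic linear + divisive-normalization stage. It is not code from BioMultiLayer_L_NL_color: the Sobel filters merely stand in for the wavelet stage, and all parameter values are illustrative rather than the fitted ones.

```matlab
% One generic linear + nonlinear (divisive normalization) stage:
%   y = L*x ;  r = sign(y).*|y|^g ./ (b + H*|y|^g)
% Here L is a pair of oriented derivative filters and H is Gaussian pooling.
x = double(imread('cameraman.tif'))/255;            % any grayscale image in [0,1]
h = fspecial('sobel');
y = cat(3, imfilter(x, h, 'replicate'), ...         % two oriented "subbands"
           imfilter(x, h', 'replicate'));
g = 2;  b = 0.01;                                   % illustrative exponent and offset
e = abs(y).^g;
pool = imgaussfilt(sum(e, 3), 2);                   % interaction kernel H: local pooled energy
r = sign(y) .* e ./ (b + repmat(pool, [1 1 2]));    % normalized responses
imagesc(r(:,:,1)); axis image off; colormap gray    % show one normalized subband
```

The actual models stack several such stages (brightness, contrast, CSF and masking, chromatic front-end and wavelets) and replace the toy filters and pooling kernel with the fitted ones.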
+ + The parameters of the different layers were fitted in different ways: while the 2nd and 3rd layers (contrast and CSF+masking) were determined using Maximum Differentiation [[Malo and Simoncelli SPIE 15](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/malo15a-reprint.pdf)], the 1st and 4th layers (chromatic front-end and wavelet layer) were fitted to reproduce subjective image distortion data [[PLoS 18](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0201326)], and then fine-tuned to reproduce classical masking [[Front. Neurosci. 19](https://www.frontiersin.org/articles/10.3389/fnins.2019.00008/full)]. + + - **Download the Toolbox!:** [BioMultiLayer_L_NL_color.zip (49MB)](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/BioMultiLayer_L_NL_color.zip) + + ## 2019 - 2021: Convolutional and differentiable implementations + + The matrix formulation developed in [[PLoS 18](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0201326), [Front. Neurosci. 19](https://www.frontiersin.org/articles/10.3389/fnins.2019.00008/full)] and implemented in BioMultiLayer_L_NL_color is elegant, but it is not applicable to large images nor appropriate to be included in Python deep-learning schemes since it is implemented in Matlab. Recently we worked to solve these issues and to confirm the choices of the chromatic part. This led to the deep Percepnet [[IEEE ICIP 20](https://ieeexplore.ieee.org/document/9190691)], and to the convolutional version of the above MultiLayer L+NL cascade [J.Vision, Proc. VSS 2021]. While Percepnet has the advantage of being implemented in Python and hence ready for automatic differentiation (state-of-the-art in image quality), it has the disadvantage of being based on a restricted version of Divisive Normalization (no explicit interactions in space/scale) [[ICLR 17](https://openreview.net/forum?id=rJxdQ3jeg)]. On the other hand, BioMultiLayer_L_NL_color_convolutional has a more general and interpretable version of the Divisive Normalization (it includes the full range of interactions in space/scale/orientation). Moreover, the color adaptation choices and the scaling of the achromatic and chromatic channels have been confirmed by positive psychophysical and statistical behaviors [[J. Neurophysiol.19](https://journals.physiology.org/doi/abs/10.1152/jn.00487.2019), [J. Math.Neurosci.20](https://mathematical-neuroscience.springeropen.com/articles/10.1186/s13408-020-00095-8)]. However, its derivatives are implemented in Matlab, so it is not ready to be included in deep-learning schemes right away. There is a lot of room for improvement of its parameters! + + - **Download the Toolbox!:** [BioMultiLayer_L_NL_color_convolutional.zip (76MB)](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/BioMultiLayer_L_NL_color_convolutional.zip) + + - **Visit Github!:** [Perceptnet](https://github.com/alexhepburn/perceptnet) + + - **Statistical and Psychophysical support for the chromatic choices (I/II):** The small scale model in this paper uses the same chromatic choices as the BioMultiLayer_L_NL models. [Code and Data for small scale recurrent Wilson-Cowan network [J.Neurophysiol. 2020]](./../flow_wilson) + + - **Statistical and Psychophysical support for the chromatic choices (II/II):** The small scale model in this paper uses the same chromatic choices as the BioMultiLayer_L_NL models. [Code and Data for small scale Div. Norm. [J.Math.Neurosci.
2020]](./../spatio_chromatic) + + ## Psychophysical test-bed for model tuning and comparison + + The figure below (computed using [VISTALAB](./../vistalab) and [ColorLab](./../content)) illustrates distinctive features of early vision: (a) the bandwidth of the achromatic and the chromatic channels is markedly different, (b) the response to contrast is a saturating nonlinearity whose slope (sensitivity) depends on the frequency, and the response attenuates as a function of the properties of the background (note how the test is more salient (highlighted in green) on top of a very different background, while it is masked (highlighted in red) on top of similar backgrounds), and (c) the visibility of i.i.d. noise seen on top of a natural image is not uniform: e.g. visibility is smaller in high contrast regions. + + These quite visible facts can be used to tune the parameters of the mechanistic models considered above. One could play with the parameters by hand until the response curves qualitatively reproduce what one actually sees. We suggested this idea to improve model fit in natural image databases [Front.Neurosci.18] and (for the first time!) here are the data and code to perform such tune-it-yourself experiments: [experiments_VistaModels.zip (400MB)](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/experiments_VistaModels.zip). + + The file is huge because it contains thousands of tests to compute detailed contrast response curves and distortion measures on the TD database. Moreover, it also has the corresponding responses of the three mechanistic models! + + The results below suggest that the models are roughly equivalent, but the most recent one displays better behavior (on top of having more plausible receptive fields [[PLoS 18](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0201326)]). More importantly, while the results on Image Quality are way better than the popular Structural Similarity Index SSIM (see VistaQualityTools), there is still a lot of room for improvement through these tune-it-yourself experiments! + + ## Model Comparison + + # (B) Principled Models + + ## Efficient coding in mechanistic models + + We have shown that models including point-wise Weber-like saturation for brightness lead to a decreasing signal-to-noise ratio as a function of the luminance [J.Opt.95]. Moreover, taking into account more general cascades of linear+nonlinear layers (e.g. local-frequency transforms and divisive normalization after Weber-brightness), we have seen that the efficiency of such systems (in terms of redundancy reduction) decreases with luminance and contrast, which is consistent with the distribution of natural images in local frequency domains [PLoS 18]. We have seen that the discrimination ability of Local-DCT+Div.Norm. models is larger in the more populated regions of the frequency-amplitude domain [Im.Vis.Comp.97]. Additionally, we have seen that the mutual information between the coefficients of the image representation is progressively reduced from the retina to the normalized representation, both in the local-DCT + DN case [IEEE TIP 06] and in the Orthogonal wavelet+DN case [Neur.Comp.10](https://www.uv.es/vista/vistavalencia/papers/Malo_Laparra_Neural_10b.pdf). + + This body of results means that the mechanistic models considered above display remarkable adaptation to the natural image statistics.
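As a toy version of those mutual-information measurements (not the published experiments, which use the actual models and large natural-image ensembles), the following Matlab sketch estimates the dependence between two nearby oriented coefficients before and after a simple divisive normalization, using a crude 2D-histogram estimator. The filter, the pooling kernel, and all constants are illustrative, and the Image Processing Toolbox is assumed.

```matlab
% Crude redundancy measurement: mutual information (in bits) between nearby
% oriented coefficients, before and after divisive normalization.
x  = double(imread('cameraman.tif'))/255;
y  = imfilter(x, fspecial('sobel'), 'replicate');   % one oriented band (stand-in for a wavelet band)
dn = y ./ sqrt(0.01 + imgaussfilt(y.^2, 2));        % divisive normalization by local energy

d      = 4;                                         % distance between the two coefficients (pixels)
mi_lin = pair_mi(y(:,1:end-d),  y(:,1+d:end));
mi_dn  = pair_mi(dn(:,1:end-d), dn(:,1+d:end));
fprintf('MI linear band: %.3f bits | MI normalized band: %.3f bits\n', mi_lin, mi_dn);

function I = pair_mi(a, b)                          % histogram-based MI estimate
    p  = histcounts2(a(:), b(:), [32 32], 'Normalization', 'probability');
    pp = sum(p, 2) * sum(p, 1);                     % product of the marginals
    nz = p > 0;
    I  = sum(p(nz) .* log2(p(nz) ./ pp(nz)));
end
```

The published results report this kind of reduction systematically across the full cascade; the sketch only shows the mechanics of the measurement.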
+ + Along the same lines, in collaboration with NYU (Balle and Simoncelli) we have optimized the described linear+nonlinear architectures for optimal autoencoding. By including both the linear and the nonlinear parts in the optimization we get unprecedented rate-distortion performance (see paper and code here [[ICLR 17](http://www.cns.nyu.edu/~lcv/iclr2017)]), way better than our previous image coders based on V1 models with fixed linear stages (see the [VistaCoRe](./../../../image_video_processing/vistacore/content) Toolbox). + + ## Statistically-based linear receptive fields + + Statistical goals such as decorrelation (Principal Component Analysis, PCA) and Independent Component Analysis (ICA) often lead to sensible linear receptive fields when trained on natural scenes. For instance, spatio-spectral PCA leads to compact representations that disentangle reflectance and spectral illumination from retinal irradiance, and it leads to spatial-frequency sensors with smooth spectral response [IEEE TGRS 13] (see VistaSpatioSpectral). In collaboration with Helsinki University (Gutmann and Hyvarinen) we explored ICA-related techniques. Complex ICA led to local and oriented receptive fields in phase quadrature [LNCS11] (download the [Complex ICA](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/CICA_toolbox.zip) Toolbox). Higher Order Canonical Correlation Analysis (HOCCA) combines the sparsity goal with optimal correspondence between identified features in domain adaptation problems, leading to biologically plausible spatiochromatic receptive fields which adapt to changes in the illumination (PLoS 14, see the [HOCCA](./../../../feature_extraction/hocca/content) Toolbox). + + This analysis of ICA methods concluded with a refutation of a classical result in cortical organization based on Topographic ICA: in fact (as opposed to Hyvarinen & Hoyer Vis. Res. 2001) it does not lead to orientation domains [PLoS 17]. See code and results to analyze [TICA](./../../tica/content) receptive fields. + + ## Statistically-based nonlinearities + + Instead of optimizing the mechanistic models for efficient coding, we tried a stronger approach to test the Efficient Coding Hypothesis: use purely data-driven techniques instead of assuming models which already have the right functional form. We developed a family of invertible techniques for manifold unfolding and for manifold Gaussianization. + + The unfolding techniques identify nonlinear sensors that follow curved manifolds. These include Sequential Principal Curves Analysis [**SPCA**](./../../../feature_extraction/spca/content) and its sequels: Principal Polynomial Analysis [**PPA**](./../../../feature_extraction/ppa/content) and Dimensionality Reduction based on Regression, [**DRR**](./../../../feature_extraction/ddr/content). + + The Gaussianization technique (Rotation-Based Iterative Gaussianization, [**RBIG**](./../../../feature_extraction/rbig/content)) does not identify sensors, but it allows one to compute the PDF. Therefore it is useful to define discrimination regions according to information maximization or error minimization. See the kind of predictions made by these unfolding techniques (SPCA [Network 06, NeCo12, Front.
Human Neurosci.15, ArXiv 16, https://arxiv.org/pdf/1606.00856.pdf], and PPA-DRR [SPIE13, Int.J.Neur.Syst.14, IEEE Sel.Top.Sig.Proc.15]) and by the Gaussianization technique [[Talk at LeCun Lab NYU 13](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/2013_Courant_features_RBIG.pdf), IEEE TNN 11]. + + Closely related to optimal discrimination (or optimal metric) for error minimization is the concept of Fisher Information. Our lab has a tradition in the study of Riemannian metrics induced by nonlinear perception systems [J. Malo PhD 99, Displ.99]. Over the years, the ideas about the geometrical transforms induced by the system and their effect on information processing have evolved from distance computation to the consideration of the transformation of neural noise [Displ.99, Patt.Recog.03, IEEE TIP 06, JOSA A 10, SPIE 15, NIPS 17, PLoS 18]. + + # (C) Engineering-motivated Models + + ## Perceptually-weighted motion estimation: VistaVideoCoding + + What can be predicted is not worth transmitting! This simple idea is the core of predictive coding used in most successful video coders (e.g. MPEG). In predictive coding, motion information is the key to predicting the future from the past. MPEG-like coders first compute the optical flow (or displacement field) and encode the prediction error in a transformed domain which (not surprisingly!) is similar to the [V1 mechanistic models](#a-empirical-mechanistic-models) described above. + + In this video-coding context we improved motion estimation by connecting the optical flow computation with the perceptual relevance of the prediction error: we proposed to improve the resolution of the motion estimate only if the prediction error was hard to encode according to our improved V1 models [LNCS97, Electr.Lett.98, J.Vis.01]. This gave rise to smoother motion flows more appropriate for motion-based segmentation [Electr.Lett.00a], and to better video coders [Electr.Lett.00b, IEEE TIP 01]. + + - Download the motion estimation and video coding toolbox! [VistaVideoCoding](./../../../image_video_processing/videocodingtools/content). + + ## Image Coding: VistaCoRe + + Image compression requires vision models that rank visual features according to their perceptual relevance so that extra bits can be allocated to encode the subjectively important aspects of the image. + + The vision model based on DCT and Divisive Normalization considered above leads to better decoded images, at the same compression ratio, than JPEG and than variants based on simpler models of masking. + + See the [VistaCoRe](./../../../image_video_processing/vistacore/content) (Coding and Restoration Toolbox), and the references [Electr.Lett.95, Electr.Lett.99, Im.Vis.Comp.00, Patt.Recog.03, IEEE TNN 05, IEEE TIP 06a, IEEE TIP 06b, JMLR08]. + + ## Image and Video Quality: VistaQualityTools + + Computing perceptual distances between images requires vision models that identify relevant and negligible visual features. Distortions in features that will be neglected by the observers should have no effect on the distance, and the other way around for visually relevant features. The different models can be quantitatively compared by their accuracy in reproducing the opinion of viewers in subjectively rated databases. + + The three vision models considered above (based on DCTs, orthonormal wavelets, and overcomplete wavelets) have been used to propose distortion metrics that outperform SSIM.
See [VistaQualityTools](./../../../image_video_processing/vistaqualitytools/content), and the references [Im.Vis.Comp.97, Displays99, Patt.Recog.03, IEEE Trans.Im.Proc.06] for the DCT metric, [JOSA 10, Neur.Comp.10] for the orthogonal wavelet metric, and [PLoS 18, Frontiers Neurosci.18] for the metric based on overcomplete wavelets. referencias: - nombre: "VistaLab: The Matlab Toolbox for Spatio-Temporal Vision. Univ. Valencia 1997" @@ -71,20 +133,90 @@ referencias: autores: "J. Malo, et al." publicacion: "Electronics Letters 36(20): 1693-1694 (2000)" url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/ELECT98.PS.gz" - - nombre: "Visual Aftereffects and Sensory Nonlinearities from a Single Statistical Framework" + - nombre: "Visual aftereffects and sensory nonlinearities from a single statistical framework" autores: "V. Laparra & J. Malo" - publicacion: "Frontiers in Human Neuroscience 9:557 (2015)" - url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/LaparraMalo15.pdf" + url: "https://www.frontiersin.org/articles/10.3389/fnhum.2015.00557/full" + +imagenes: + - ruta: "VistaModels1.webp" + titulo: "Mechanistic Models" + descripcion: "Following Hubel-Wiesel and McCulloch-Pitts, our models are cascades of two basic elements: (a) a linear transform (a not necessarily convolutional set of receptive fields), and (b) a nonlinear saturation (either divisive or subtractive) describing the interactions between the linear units. We have played with different versions of such elements. For the linear part we explored center-surround units, local-DCTs, Orthonormal Wavelets, Overcomplete Wavelets and Laplacian Pyramids. For the nonlinear part we played with different adaptive nonlinearities such as the Divisive Normalization and the subtractive Wilson-Cowan equations. See [[PLoS 2018](https://arxiv.org/abs/1711.00526)] for a comprehensive account of the maths, and [[ArXiV 2018](https://arxiv.org/abs/1804.05964)] for the equivalence between the considered nonlinear models. These models have been tuned to reproduce basic psychophysics such as contrast response curves and subjective image distortion." + + - ruta: "VistaModels2.webp" + titulo: "Statistical Principles" + descripcion: "The emergence of (a) specific sensors (e.g. the red and green curves), or (b) specific discrimination properties (ellipsoids in gray) may be understood as an adaptation to the statistics of the natural input (samples in blue). We have used these [Barlow-style information-theoretic principles](https://www.youtube.com/watch?v=cv9hje42i_E) in two ways: unfolding the data manifolds [[Front. Human Neurosci. 15](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/LaparraMalo15.pdf)], and Gaussianizing the data manifolds [IEEE Trans. Neur. Nets. 11]. Interestingly, nonlinearities of the Human Visual System (from retina [[J.Opt.95](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/JOPT95.PS.gz)] to cortex [[Im.Vis.Comp.00](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/ivc99.ps.gz), [Neural Comp.10](https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Malo_Laparra_Neural_10b.pdf)]) have remarkable statistical effects too!"
+ + - ruta: "modelB.webp" + titulo: "[Multi-Layer Network Model](#2013---2018-multi-layer-network-with-nonlinear-opponent-color-overcomplete-wavelet-and-divisive-normalization)" + descripcion: "Multilayer network model that includes nonlinear chromatic processing and overcomplete wavelets." + + - ruta: "facts1.webp" + titulo: "[Facts of Vision - Achromatic and Chromatic Bandwidths](#psychophysical-test-bed-for-model-tuning-and-comparison)" + descripcion: "Illustration of distinctive features of early vision, such as the different frequency bands of the achromatic and chromatic channels." + + - ruta: "visib_noise.webp" + titulo: "[Noise Visibility on Natural Images](#psychophysical-test-bed-for-model-tuning-and-comparison)" + descripcion: "Visualization of noise visibility in natural images, showing lower visibility in high contrast regions." + + - ruta: "compCSFs.webp" + titulo: "[Model Comparison - CSFs](#b-principled-models)" + descripcion: "Comparison of Contrast Sensitivity Functions (CSF) between different mechanistic models." + + - ruta: "compResponses.webp" + titulo: "[Model Comparison - Response Curves](#b-principled-models)" + descripcion: "Response curves of different vision models, adjusted to reproduce psychophysical phenomena." + + - ruta: "compNoise.webp" + titulo: "[Model Comparison - Noise Visibility](#b-principled-models)" + descripcion: "Comparison of noise visibility in images across various vision models." + + - ruta: "principled.webp" + titulo: "[Principled Models - Efficient Coding](#efficient-coding-in-mechanistic-models)" + descripcion: "Example of how mechanistic models are adapted to natural image statistics for redundancy reduction." + + - ruta: "autoencoder.webp" + titulo: "[Autoencoder for Optimal Representation](#efficient-coding-in-mechanistic-models)" + descripcion: "Representation of an autoencoder optimized for unprecedented performance in image coding." + + - ruta: "LinearStats.webp" + titulo: "[Statistically-based Linear Receptive Fields](#statistically-based-linear-receptive-fields)" + descripcion: "Linear receptive fields derived from statistical techniques such as PCA and ICA trained on natural scenes." + + - ruta: "ResponsesSPCA1.webp" + titulo: "[SPCA Responses 1](#statistically-based-nonlinearities)" + descripcion: "Sensory response based on Sequential Principal Curves Analysis (SPCA), showing sensors adapted to the nonlinear properties of the visual system." + + - ruta: "ResponsesSPCA2.webp" + titulo: "[SPCA Responses 2](#statistically-based-nonlinearities)" + descripcion: "Another illustration of sensory responses using SPCA techniques." + + - ruta: "neuro_rbig.webp" + titulo: "[Gaussianization of Nonlinear Manifolds](#statistically-based-nonlinearities)" + descripcion: "Use of Gaussianization to define optimal discrimination regions based on information maximization or error minimization." + + - ruta: "metricFisher.webp" + titulo: "[Fisher Information Metric for Vision Models](#statistically-based-nonlinearities)" + descripcion: "Representation of the Fisher information concept and its application in evaluating nonlinear perception systems." + + - ruta: "flow_800_10_excel.gif" + titulo: "[Optical Flow for Video Coding](#perceptually-weighted-motion-estimation-vistavideocoding)" + descripcion: "Perceptually enhanced optical flow used for predictive coding in video compression."
+ + - ruta: "coding.webp" + titulo: "[Image Coding and Restoration](#image-coding-vistacore)" + descripcion: "Image coding using vision models based on DCT and Divisive Normalization to improve the quality of compressed images." enlaces: - - nombre: "Matlab Toolbox (version 4.0)" + - nombre: "Updated Matlab Toolbox (VISTALAB 4.0)" url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Vistalab.zip" - - nombre: "VistaLab User Guide" - url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Vistalab_userguide.pdf" - - nombre: "Extensions of VistaLab I: VistaVideoCoding" + - nombre: "Outdated toolbox (VISTALAB 1.0)" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/BasicVideoTools_code.zip" + - nombre: "Front. Human Neurosci. 15 paper" + url: "https://www.frontiersin.org/articles/10.3389/fnhum.2015.00557/full" + - nombre: "Extensions of VISTALAB I: VistaVideoCoding" url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/VistaVideoCoding.zip" - - nombre: "Extensions of VistaLab II: BioMultiLayer_L_NL_color" + - nombre: "Extensions of VISTALAB II: VistaModels" url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/BioMultiLayer_L_NL_color.zip" - - nombre: "Extensions of VistaLab III: ColorLab" + - nombre: "Extensions of VISTALAB III: ColorLab" url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Colorlab.zip" ---- +--- \ No newline at end of file diff --git a/content/code/vision_and_color/tica/_index.md b/content/code/vision_and_color/tica/_index.md new file mode 100644 index 00000000..38f3bf3e --- /dev/null +++ b/content/code/vision_and_color/tica/_index.md @@ -0,0 +1,13 @@ +--- +title: "Topographic ICA" +img: "fig_web_3.webp" +image_alt: "Topographic ICA Image" +link: "./tica/content" +description: | + Topographic Independent Component Analysis (TICA) is a method for learning topographically organized features from visual data. It extends the Independent Component Analysis (ICA) algorithm by incorporating spatial proximity, aiming to model how neurons in the visual cortex represent information with localized and continuous receptive fields. TICA has been applied to understand orientation domains in the brain's visual cortex, but studies have shown that the model may produce discontinuous, scrambled orientation maps, contradicting experimental observations. +references: + - "Hyvärinen, A. and Hoyer, P. (2001). Topographic Independent Component Analysis." + - "Ma, L. et al. (2008). Neurocomputing: An application of TICA in visual representation." 
+type: "code" +layout: "single" +--- diff --git a/content/code/vision_and_color/tica/content.md b/content/code/vision_and_color/tica/content.md new file mode 100644 index 00000000..4d82b801 --- /dev/null +++ b/content/code/vision_and_color/tica/content.md @@ -0,0 +1,141 @@ +--- +title: "Topographic ICA reveals random scrambling of orientation in visual space" + +abstract: | + # Table of Contents + - [Orientation domains and the proposed analysis](#1-orientation-domains-and-the-proposed-analysis) + - [Extra examples of continuity violations in the Topographic ICA literature](#2-extra-examples-of-continuity-violations-in-the-topographic-ica-literature) + - [Main Result: salt-and-pepper distribution of TICA oriented sensors](#3-main-result-salt-and-pepper-distribution-of-tica-oriented-sensors) + - [Full set of statistical tests on randomness](#4-full-set-of-statistical-tests-on-randomness) + - [Extra results for images of bigger complexity and other settings of the algorithm](#5-extra-results-for-images-of-bigger-complexity-and-other-settings-of-the-algorithm) + + # Supplementary Material: extra figures, data and code + + ## 1. Orientation domains and the proposed analysis: + This section describes the experimental orientation domains in the surface of the V1 cortex of a cat and a ferret. Using intrinsic optical imaging, different preferred orientations of the cells are represented with colors. The analysis shows that the Topographic ICA (TICA) topology fails to explain the smoothness found in the retina-cortex projection, contrary to what was proposed by Hyvärinen and Hoyer. The random scrambling of the oriented filters revealed in this study demonstrates that TICA does not account for the organization of orientation domains in primates. + + ## 2. Extra examples of continuity violations in the Topographic ICA literature: + Multiple examples of continuity violations in TICA are presented from various sources in the literature, including works by Hyvärinen et al. (2001, 2009) and Ma et al. (2008). In each case, there are clear violations of the expected local continuity of orientation domains in the image space. These results suggest that the functional explanation proposed by TICA is inconsistent with empirical observations in the retina-cortex projection. + + ## 3. Main Result: salt-and-pepper distribution of TICA oriented sensors: + The main finding of this analysis is that TICA produces a salt-and-pepper distribution of oriented filters in the image space, rather than continuous orientation domains. This inconsistency with the smooth retina-cortex projection is demonstrated across various visual angles and resolutions. New training sets were used for this analysis, and the results were consistent with the lack of continuity in the spatial distribution of TICA sensors. + + ## 4. Full set of statistical tests on randomness: + Statistical tests were conducted to determine whether TICA's orientation domains are more similar to a Cartesian grid or a random sample. The tests show that TICA’s spatial distribution is random and does not form the distinct, continuous orientation domains observed in biological visual systems. These results are based on KL-divergence comparisons between the TICA distributions and uniform distributions. + + ## 5. Extra results for images of bigger complexity and other settings of the algorithm: + Further experiments were performed with more complex images and alternative algorithm settings, such as different nonlinearities and pooling neighborhoods. 
In every case, the results were consistent: the oriented filters produced by TICA remained scrambled and discontinuous. Even in cases with significantly higher complexity and larger PCA dimensions, TICA fails to produce locally continuous orientation domains in the image space. + + + +imagenes: + - ruta: "fig_web_1.webp" + titulo: "[Summary](#1-orientation-domains-and-the-proposed-analysis)" + descripcion: "The novel analysis consists of projecting the TICA topology back into the image space by representing the central location of each artificial neuron’s receptive field at the corresponding spatial position with the corresponding orientation-dependent color code. The experimental smoothness suggested by [Bosking et al. 02] implies that a proper theory would lead to locally continuous orientation domains also in the retinal space. Our results show that this is not the case for TICA." + + - ruta: "fig_web_2.webp" + titulo: "[Orientation domains in image space](#1-orientation-domains-and-the-proposed-analysis)" + descripcion: "Single cell measurements show that displacement across neurons in the cortical surface results in equivalent displacement of the corresponding receptive fields in the visual field. The orientation preferences of the receptive fields vary smoothly, as revealed by intrinsic imaging. Stimulation with vertically and horizontally displaced lines shows that the retina-cortex projection is smooth, suggesting distorted orientation domains in the image space." + + - ruta: "fig_web_3.webp" + titulo: "[Proposed analysis of TICA results](#1-orientation-domains-and-the-proposed-analysis)" + descripcion: "Our proposal involves analyzing the TICA results in the image space. By projecting the TICA topology onto the image space, we observe that the distribution of orientation preferences is inconsistent with the smoothness found in the retina-cortex projection. This finding contradicts the original proposal by Hyvärinen and Hoyer." + + - ruta: "fig_libro_2009v2.webp" + titulo: "[Continuity violations in TICA (2009)](#2-extra-examples-of-continuity-violations-in-the-topographic-ica-literature)" + descripcion: "This example from Hyvärinen, Hurri, and Hoyer (2009) shows how larger pooling neighborhoods in the TICA model can still result in random mixtures of sensors, violating the expected local continuity of orientation domains." + + - ruta: "fig_xinos_2008v2.webp" + titulo: "[Continuity violations in TICA (2008)](#2-extra-examples-of-continuity-violations-in-the-topographic-ica-literature)" + descripcion: "An overcomplete version of TICA, as used by Ma et al. (2008), also demonstrates clear violations of the expected continuity in orientation domains, with random mixtures of sensors appearing frequently."
+ + - ruta: "result1.webp" + titulo: "[Main Result: salt-and-pepper distribution of TICA sensors](#3-main-result-salt-and-pepper-distribution-of-tica-oriented-sensors)" + descripcion: "The main result of this work shows that TICA produces a salt-and-pepper distribution of orientation sensors rather than continuous orientation domains. This is observed consistently across different visual angles and resolutions." + + - ruta: "test.webp" + titulo: "[Statistical tests for orientation domains](#4-full-set-of-statistical-tests-on-randomness)" + descripcion: "KL-divergence tests compare the distribution of TICA sensors with a Cartesian grid and with uniform samples. These tests confirm that the spatial distribution of TICA sensors is more similar to random sampling than to distinct, continuous orientation domains." + + - ruta: "non_linearities_complete.webp" + titulo: "[Results with different nonlinearities](#5-extra-results-for-images-of-bigger-complexity-and-other-settings-of-the-algorithm)" + descripcion: "Comparing different nonlinearities in the learning process, we find that the preferred locations of sensors in the image space remain scrambled across different nonlinearities, confirming the random nature of the resulting orientation maps." + + - ruta: "Neighborhood_complete.webp" + titulo: "[Results with different neighborhood settings](#5-extra-results-for-images-of-bigger-complexity-and-other-settings-of-the-algorithm)" + descripcion: "Results from experimenting with different pooling regions in the TICA model show that changing the size of pooling neighborhoods does not lead to continuous orientation domains, as the orientation maps remain scrambled." + + - ruta: "result2.webp" + titulo: "[Results with increased image complexity](#5-extra-results-for-images-of-bigger-complexity-and-other-settings-of-the-algorithm)" + descripcion: "Increasing the complexity of the images used in TICA by expanding the field of view (up to 100x100 pixels) reveals that, even at this higher complexity, the distribution of oriented filters remains randomly scrambled in the image space." + + - ruta: "comparison_convergence.webp" + titulo: "[Convergence comparison](#5-extra-results-for-images-of-bigger-complexity-and-other-settings-of-the-algorithm)" + descripcion: "This comparison of convergence shows that while many filters have yet to converge, the filters that have already emerged display a random distribution of orientations, confirming the salt-and-pepper pattern." + + - ruta: "functions_100x100A.webp" + titulo: "[Functions 100x100 - A](#5-extra-results-for-images-of-bigger-complexity-and-other-settings-of-the-algorithm)" + descripcion: "An example of the functions that emerged from TICA with 100x100 pixel images. The distribution of these filters shows random scrambling of orientation in the retinal space." + + - ruta: "functions_100x100B.webp" + titulo: "[Functions 100x100 - B](#5-extra-results-for-images-of-bigger-complexity-and-other-settings-of-the-algorithm)" + descripcion: "Another set of functions that emerged from the 100x100 pixel images, showing that even in this higher complexity case, the oriented filters remain randomly distributed." + +referencias: + - nombre: "Topographic ICA reveals random scrambling of orientation in visual space" + autores: "M. Martinez-Garcia, L.M. Martinez-Otero, J. 
Malo" + publicacion: "PLoS ONE, 2017" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Plos_Orient_maps_iteration2.pdf" + + - nombre: "Experimental procedure" + autores: "J. Brotons" + publicacion: "Internal lab report, 2016" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/measurement.pdf" + + - nombre: "Example ferret data" + autores: "" + publicacion: "" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/example_ferret_data.zip" + + - nombre: "Hyvärinen and Hoyer Topographic ICA Toolbox" + autores: "A. Hyvärinen, P. Hoyer" + publicacion: "ImageICA Toolbox" + url: "https://research.ics.aalto.fi/ica/imageica/" + + - nombre: "Matlab CODE" + autores: "" + publicacion: "" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/OrientationDomainsTICA.zip" + + - nombre: "README for Matlab Code" + autores: "" + publicacion: "" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/README.txt" + + - nombre: "Full statistical tests" + autores: "" + publicacion: "" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/full_test.pdf" + + - nombre: "PLoS ONE 2006" + autores: "Ohki et al." + publicacion: "Nature, 2006" + url: "" + + - nombre: "Topographic ICA findings on continuity violations" + autores: "Hyvärinen et al." + publicacion: "Vision Research, 2001" + url: "" + +enlaces: + - nombre: "Matlab Toolbox" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/bmi.zip" + + - nombre: "Project Proposal BFU2014-58776" + url: "https://huggingface.co/datasets/isp-uv-es/Web_site_legacy/resolve/main/code/soft_visioncolor/Project_Martinez_Malo_BFU2014_58776_R.pdf" +--- diff --git a/content/people/full_professors/gustau_camps_valls.md b/content/people/full_professors/gustau_camps_valls.md index a8b0f0ae..bf0cbbbb 100644 --- a/content/people/full_professors/gustau_camps_valls.md +++ b/content/people/full_professors/gustau_camps_valls.md @@ -11,8 +11,8 @@ params: twitter: 'http://twitter.com/isp_uv_es' github: 'https://github.com/IPL-UV/' semanticscholar: 'https://www.semanticscholar.org/author/1397959153' - uv: '/old_pages/people/gcamps.html' - link_image: '/old_pages/people/gcamps.html' + uv: 'https://www.uv.es/gcamps/' + link_image: 'https://www.uv.es/gcamps/' --- My research is related to statistical learning for modeling and understanding the Earth system. diff --git a/content/people/full_professors/jesus_malo.md b/content/people/full_professors/jesus_malo/_index.md similarity index 77% rename from content/people/full_professors/jesus_malo.md rename to content/people/full_professors/jesus_malo/_index.md index 54ca5d52..a7d5b7b8 100644 --- a/content/people/full_professors/jesus_malo.md +++ b/content/people/full_professors/jesus_malo/_index.md @@ -10,11 +10,13 @@ params: orcid: 'https://orcid.org/0000-0002-5684-8591' twitter: 'https://twitter.com/jesusmalo32' github: 'https://github.com/IPL-UV/' - uv: '/old_pages/people/malo.html' - link_image: '/old_pages/people/excathedra.html' + uv: 'https://www.uv.es/jmalo/' + link_image: './full_professors/jesus_malo/ex_cathedra' +type: "people" +layout: "single" --- -I'm interested in understanding human vision from information theoretic principles. 
This statistical view has implications in experimental and computational neuroscience. [See the ex-cathedra statement](/old_pages/people/excathedra.html)
+I'm interested in understanding human vision from information theoretic principles. This statistical view has implications in experimental and computational neuroscience. [See the ex-cathedra statement](./jesus_malo/ex_cathedra)
diff --git a/content/people/full_professors/jesus_malo/ex_cathedra.md b/content/people/full_professors/jesus_malo/ex_cathedra.md
new file mode 100644
index 00000000..d130c6e9
--- /dev/null
+++ b/content/people/full_professors/jesus_malo/ex_cathedra.md
@@ -0,0 +1,91 @@
+---
+title: "First words ex-cathedra"
+abstract: |
+  # Jesús Malo (San Francisco, Starbucks at 390 Stockton St., February 2015)
+
+  Circa 2015, applications for full professorship in Spanish universities (cathedra) involved writing an essay describing your career and personal views on science. Here is what I wrote to obtain the status of **Accredited University Professor** from the official National Evaluation Agency...
+
+  Now (after the positive outcome in July 2015), I am uploading the version with uncensored pictures, full text, and over 150 hyperlinks! These are my first words ex-cathedra (even though my salary, as well as the salary of over 2500 colleagues in the same situation, will remain the same for a while unless we do something):
+
+  ---
+
+  ## Table of Contents
+  1. [Why a physicist would ever care about Human Vision?](#1-why-a-physicist-would-ever-care-about-human-vision)
+  2. [Chronological summary of my career](#2-chronological-summary-of-my-career)
+  3. [My research contributions](#3-my-research-contributions)
+  4. [My teaching activities](#4-my-teaching-activities)
+  5. [Economic constraints of science in Spain](#5-economic-constraints-of-science-in-spain)
+
+  ---
+
+  ## 1. Why a physicist would ever care about Human Vision?
+
+  **Think again: human vision is cool!**
+
+  The leitmotif of my research and teaching activity is the study of **visual information processing in the human brain**. This is a biological and subjective problem: not very appealing adjectives for a "big-bang theory guy." Nevertheless, the aspects of this problem that may be of interest to physicists determined the direction of my scientific career.
+
+  Despite the overuse of the word **multidisciplinary**, you have to consider that **Visual Perception** is a truly multidisciplinary problem. On one hand, the input signal certainly involves plain **Physics** (such as light emission and scattering in everyday scenes, i.e., classical **Radiometry**) and image formation in biological systems (**Physiological Optics**). On the other hand, the analysis of such an input signal is a problem for **Neuroscience**, which studies natural neural networks for image understanding.
+
+  Human Vision is not limited to classical Optics (Newton's laws), but also involves understanding how the sensors (the visual cortex) process these signals. Explaining visual cortex phenomena requires concepts from **Statistics** and **Information Theory**, or, in today's jargon, **Machine Learning**. Interestingly, the system being studied (the human brain) can also inspire new mathematical approaches.
+
+  This problem is fascinating for a physicist due to the complex dynamics of the visual brain, where quantitative theories are recent and still under discussion. The study of **Vision** combines experiments, mathematical theories, and technological applications, which aligns with the physicists' approach.
Vision research involves **Psychology**, **Optometry**, **Neurophysiology** (through **Psycho-Physics**), and applications in **Image Processing** and **Computer Vision**.
+
+  I have made contributions (or introduced some **colored noise** 😉) in most of the disciplines mentioned above over the last 20 years.
+
+  ---
+
+  ## 2. Chronological summary of my career
+
+  **While Kuhn and Marx were kind of wrong, Sinatra was right: I did it my way!**
+
+  (Chronological details of collaborations and scientific progression will be included here, emphasizing the multidisciplinary collaboration throughout my career, including contributions to Optics, Neuroscience, and Engineering.)
+
+  ---
+
+  ## 3. My research contributions
+
+  **Colored noise in vision sciences and some thoughts on the h-index**
+
+  My work spans several areas: from **Vision Science** experiments to applications in **Image Processing**. This section includes a breakdown of my contributions, including:
+
+  - Experiments in **Physiological Optics**, **Psychophysics**, and **Image Statistics**.
+  - Development of **empirical models** of **Texture**, **Color**, and **Motion Vision**.
+  - Formulation of **principled models** of neural adaptation and information transmission.
+  - Advancements in **Statistical Learning** and feature extraction techniques.
+
+  (A comprehensive summary of these contributions, with key publications and links, will follow.)
+
+  ---
+
+  ## 4. My teaching activities
+
+  **Like Richard Dawkins at a Republican Convention**
+
+  My teaching activities over the last 19 years have been a mix of challenges and rewards, particularly in trying to convey the quantitative aspects of **Vision Science** to **Optometry** students, who typically have non-quantitative inclinations. This section highlights the methodologies and tools I developed to make complex mathematical concepts accessible, including tools like **COLORLAB** and **VirtualNeuroLabs**.
+
+  In addition to teaching **Optometry**, I have also lectured in PhD and Master's programs in **Mathematics** and **Computer Science**, contributing to the training of future scientists.
+
+  ---
+
+  ## 5. Economic constraints of science in Spain
+
+  **Why do positive evaluations for professorship not imply actual positions here?**
+
+  Economic constraints have profoundly affected the scientific landscape in Spain, particularly since the 2008 crisis. Although I have been fortunate enough to build a career, the funding cuts and lack of professorship positions are hindering the career progression of many accredited professors. An **association of accredited professors** has been formed to demand solutions for this blocked career situation, with the support of the major workers' unions. You can read more about it [here](http://acreditadosacatedra.blogspot.com.es/).
+
+  ---
+
+  For further details on each of these sections, including my research contributions and teaching philosophy, I have included over **150 hyperlinks** throughout the text, providing access to my full publications, tools, and additional resources.
+
+
+imagenes:
+  - ruta: "https://www.youtube.com/embed/0G0rXqvwR0Q"
+    titulo: "[Motion illusions and Entropy](#1-why-a-physicist-would-ever-care-about-human-vision)"
+    descripcion: "An example of this surprising behavior is the Static Motion Aftereffect (or the perception of reverse motion after prolonged exposure to a slowly moving pattern -see video-).
Physicists like explanations from first principles (the so-called laws), and this illusion can be understood according to a law based on communication theory. Sensors that maximize information transmission from sequences happen to have frequency tuning similar to that of motion-sensitive neurons in V1 cortex. For the same efficiency reason, their response is nonlinear and attenuates in the presence of high-contrast moving patterns. Exposure to such patterns induces an operation regime that leads to the illusion while the system readapts to the new situation. Optimal Information Transmission seems to be a law of Human Vision. [Find out more...] It took me 20 years to fully understand that sentence."
+
+  - ruta: "AA_new_york_university.webp"
+    titulo: "[More on the link between Physics, Neuroscience, and Statistics](#1-why-a-physicist-would-ever-care-about-human-vision)"
+    descripcion: "If you are not already convinced of the relation (since you only listen to authority arguments ;-) I have something for you. New York University (36 Nobel Laureates, Wikipedia dixit ;-) organizes its resources in this way: the Physics Department and the Center for Neural Science are in the very same building (both doors in the picture below lead to the same hall, and physiology and theoretical physics labs are interleaved). Moreover, the Courant Institute of Mathematics, famous for its research in Statistical Learning, is exactly on the other side of the street (Washington Place)."
+
+---
\ No newline at end of file
diff --git a/static/images/adicionales/spect3.webp b/static/images/adicionales/spect3.webp
new file mode 100644
index 00000000..a7e87d37
Binary files /dev/null and b/static/images/adicionales/spect3.webp differ
diff --git a/static/js/mode.js b/static/js/mode.js
new file mode 100644
index 00000000..40e214de
--- /dev/null
+++ b/static/js/mode.js
@@ -0,0 +1,17 @@
+function openModal(imageUrl, title, description) {
+    var modal = document.getElementById("imageModal");
+    var modalImage = document.getElementById("modalImage");
+    var modalCaption = document.getElementById("modalCaption");
+    // Convert the title and description from Markdown to HTML using Marked.js
+    var htmlTitle = marked.parse(title);
+    var htmlDescription = marked.parse(description);
+    modalImage.src = imageUrl;
+    // Insert the converted title and description as HTML
+    modalCaption.innerHTML = `${htmlTitle}
${htmlDescription}`; + modal.style.display = "block"; +} + + function closeModal() { + var modal = document.getElementById("imageModal"); + modal.style.display = "none"; +} \ No newline at end of file diff --git a/static/style/style.css b/static/style/style.css index 13a7de35..de4babb1 100644 --- a/static/style/style.css +++ b/static/style/style.css @@ -951,6 +951,14 @@ body.section-contact .navbar-nav .nav-link[href="/contact/"] { grid-template-columns: 1fr; } +.grid-layout:not(:has(.box-gallery)):not(:has(.box-enlaces)):not(:has(.box-references)) { + grid-template-areas: + "header header" + "abstract abstract" + "abstract abstract"; + grid-template-columns: 1fr; +} + /* Title one */ .box-header { @@ -1183,6 +1191,71 @@ body.section-contact .navbar-nav .nav-link[href="/contact/"] { text-decoration: none; } + +/* The Modal (background) */ +.modal { + display: none; + position: fixed; + z-index: 1000; + padding-top: 100px; + left: 0; + top: 0; + width: 100%; + height: 100%; + overflow: auto; + background-color: rgba(0, 0, 0, 0.8); /* Black with opacity */ + margin: auto; +} + + +/* Modal Content (image) */ +.modal-content { + margin: auto; + display: block; + width: auto; + max-height: 80%; + padding: auto; +} + +/* Caption of Modal Image */ +.modal-caption { + margin: auto; + display: block; + width: 80%; + text-align: center; + color: #ccc; + padding: 10px 0; +} + +/* Add Animation */ +.modal-content, .modal-caption { + animation-name: zoom; + animation-duration: 0.6s; +} + +@keyframes zoom { + from {transform: scale(0)} + to {transform: scale(1)} +} + +/* The Close Button */ +.modal-close { + position: absolute; + top: 40px; + right: 35px; + color: #f1f1f1; + font-size: 40px; + font-weight: bold; + transition: 0.3s; +} + +.modal-close:hover, +.modal-close:focus { + color: #bbb; + text-decoration: none; + cursor: pointer; +} + @media (max-width: 1200px) { .grid-layout { grid-template-areas: @@ -1221,6 +1294,14 @@ body.section-contact .navbar-nav .nav-link[href="/contact/"] { "references"; grid-template-columns: 1fr; } + + .grid-layout:not(:has(.box-gallery)):not(:has(.box-enlaces)):not(:has(.box-references)) { + grid-template-areas: + "header" + "abstract" + "abstract"; + grid-template-columns: 1fr; + } } @media (max-width: 768px) { diff --git a/themes/isp_uv/layouts/_default/causal.html b/themes/isp_uv/layouts/_default/causal.html deleted file mode 100644 index 6b7b3371..00000000 --- a/themes/isp_uv/layouts/_default/causal.html +++ /dev/null @@ -1,48 +0,0 @@ - - -{{ define "main" }} -
-
-
-

{{ .Params.title }}

-
- {{ range .Pages }} - {{ $img := urls.JoinPath "/images/adicionales" .Params.img }} - -
- -
-

{{ .Title }}

-
- -
-
- -
- - {{ .Params.image_alt }} - -
- -
-
-

{{ .Params.description | markdownify }}

-
-
- -
-
References
-
    - {{ range .Params.references }} -
  • {{ . }}
  • - {{ end }} -
-
-
-
-
- {{ end }} -
-

{{ .RawContent | markdownify }}

-
-{{ end }} \ No newline at end of file diff --git a/themes/isp_uv/layouts/_default/causal_inference/list.html b/themes/isp_uv/layouts/_default/causal_inference/list.html deleted file mode 100644 index 6b7b3371..00000000 --- a/themes/isp_uv/layouts/_default/causal_inference/list.html +++ /dev/null @@ -1,48 +0,0 @@ - - -{{ define "main" }} -
-
-
-

{{ .Params.title }}

-
- {{ range .Pages }} - {{ $img := urls.JoinPath "/images/adicionales" .Params.img }} - -
- -
-

{{ .Title }}

-
- -
-
- -
- - {{ .Params.image_alt }} - -
- -
-
-

{{ .Params.description | markdownify }}

-
-
- -
-
References
-
    - {{ range .Params.references }} -
  • {{ . }}
  • - {{ end }} -
-
-
-
-
- {{ end }} -
-

{{ .RawContent | markdownify }}

-
-{{ end }} \ No newline at end of file diff --git a/themes/isp_uv/layouts/code/single.html b/themes/isp_uv/layouts/code/single.html index a963d457..3aefd94e 100644 --- a/themes/isp_uv/layouts/code/single.html +++ b/themes/isp_uv/layouts/code/single.html @@ -14,7 +14,7 @@

{{ .Title }}

{{ end }} + {{ end }} + {{ if .Params.referencias }}

References

+ {{ end }} {{ if .Params.enlaces }}

Download

diff --git a/themes/isp_uv/layouts/partials/head.html b/themes/isp_uv/layouts/partials/head.html index a0c54678..e385fae1 100644 --- a/themes/isp_uv/layouts/partials/head.html +++ b/themes/isp_uv/layouts/partials/head.html @@ -9,4 +9,7 @@ crossorigin="anonymous" referrerpolicy="no-referrer" /> + + + diff --git a/themes/isp_uv/layouts/people/single.html b/themes/isp_uv/layouts/people/single.html new file mode 100644 index 00000000..3aefd94e --- /dev/null +++ b/themes/isp_uv/layouts/people/single.html @@ -0,0 +1,85 @@ +{{ define "main" }} +
+
+

{{ .Title }}

+
+
+

{{ .Params.abstract | markdownify }}

+
+ {{ if .Params.imagenes }} + + {{ end }} + {{ if .Params.referencias }} +
+

References

+
    + {{ range .Params.referencias }} +
  • + {{ if .url }} + {{ .nombre }} + {{ else }} + {{ .nombre }} + {{ end }}
    + {{ .autores }}
    + {{ .publicacion | markdownify }} +
  • + {{ end }} +
+
+ {{ end }} + {{ if .Params.enlaces }} +
+

Download

+ {{ if .Params.desc_download }} +
+

{{ .Params.desc_download | markdownify }}

+
+ {{ end }} + +
+ {{ end }} +
+{{ end }}
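
For orientation, the front-matter schema that the new `people/single.html` layout (like the existing `code/single.html` layout) reads is the one already used by the content pages in this diff: `imagenes` entries (`ruta`, `titulo`, `descripcion`) feed the image gallery and, presumably, the `openModal` viewer in `mode.js`; `referencias` entries (`nombre`, `autores`, `publicacion`, `url`) fill the References box; and `enlaces` entries fill the Download box. The sketch below is a minimal, hypothetical front matter assembled from those field names only; every value is a placeholder, not taken from the repository.

```yaml
---
# Hypothetical front matter for a page rendered by people/single.html.
# Field names are taken from the layouts and content files in this diff;
# all values below are placeholders.
title: "Example Person"
abstract: |
  One or two paragraphs of Markdown, rendered through markdownify.
imagenes:
  - ruta: "example_figure.webp"            # gallery image, presumably passed to openModal()
    titulo: "[Example figure](#some-section-anchor)"
    descripcion: "Caption in Markdown; mode.js converts it with marked.parse()."
referencias:
  - nombre: "Example paper title"
    autores: "A. Author, B. Author"
    publicacion: "Example Journal, 2020"
    url: "https://example.org/paper.pdf"   # omit url to render the name without a link
enlaces:
  - nombre: "Example download"
    url: "https://example.org/code.zip"
type: "people"
layout: "single"
---
```

Because both single.html layouts wrap each section in `{{ if .Params.… }}` guards, any of `imagenes`, `referencias`, `desc_download`, or `enlaces` can simply be left out of the front matter and the corresponding box is not rendered.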