Histogram Equalization with OpenCV and Python

by Carlos Melo
July 16, 2024
in Computer Vision

The histogram is a concept present, directly or indirectly, in practically every computer vision application. In histogram equalization, we aim to use the full spectrum of intensities, distributing pixel values more evenly along the x-axis.

After all, by adjusting the distribution of pixel values, we can enhance details and reveal characteristics that were previously hidden in the original image.

Therefore, understanding the concept and learning how to apply the technique is fundamental for working on digital image processing problems. In this tutorial, you will learn the theory and how to equalize histograms in digital images using OpenCV and Python.

Click here to download the source code to this post

What is an Image Histogram?

An image histogram is a type of graphical representation that shows how the intensities of the pixels of a given digital image are distributed.

In simple terms, a histogram will tell you if an image is correctly exposed, if the lighting is adequate, and if the contrast allows highlighting some desirable features.

A histogram is a graphical representation of how pixel intensity values are distributed in your image (source).

We use histograms to identify spectral signatures in hyperspectral satellite images – such as distinguishing between genetically modified and organic crops.

We use histograms to segment parts on a factory conveyor belt – applying thresholding to isolate objects.

And in algorithms such as SIFT and HOG, histograms of image gradients are fundamental for detecting and describing robust local features.

Examples of pixels and their intensity values distributed in the histogram (source).

This technique is fundamental in digital image processing and is widely used in various fields, including medical imaging, such as X-ray and CT scans.

Besides being a fundamental tool, the histogram is very cheap to compute, which makes it a popular choice for applications that require real-time processing.

Formal Definition

A histogram of a digital image f(x, y) whose intensities vary in the range [0, L - 1] is a discrete function

    \[h(r_{k}) = n_{k},\]

where r_k is the k-th intensity value and n_k is the total number of pixels in f with intensity r_k. Similarly, the normalized histogram of a digital image f(x, y) is

    \[p(r_{k}) = \frac{h(r_{k})}{MN} = \frac{n_k}{MN},\]

where M and N are the dimensions of the image (rows and columns, respectively). It is common practice to divide each component of a histogram by the total number of pixels to normalize it.

Since p(r_k) is the probability of occurrence of a given intensity level r_k in an image, the sum of all components equals 1.
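
Writing this normalization out explicitly makes the claim concrete: since the counts n_k over all intensity levels add up to the total number of pixels MN,

    \[\sum_{k=0}^{L-1} p(r_{k}) = \sum_{k=0}^{L-1} \frac{n_{k}}{MN} = \frac{MN}{MN} = 1.\]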

How to Calculate a Histogram

Manually calculating an image histogram is a straightforward process that involves counting the frequency of each pixel intensity value in an image.

For educational purposes, I have provided pseudocode and an implementation using NumPy to demonstrate how to do this.

Pseudocode to calculate histograms.

First, we initialize a vector of zeros to store the histogram. Here I am considering an 8-bit grayscale image, so there are 2^8 = 256 possible values for each pixel intensity.

Next, we capture the image’s height and width attributes. If necessary, we could also include the channels attribute.

Then, we iterate over each pixel of the image, incrementing the corresponding position in the histogram vector based on the pixel intensity value. This way, at the end of the process, the histogram vector will contain the count of occurrences of each gray level in the image, providing a clear representation of the intensity distribution in the image.
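
As a minimal sketch of the steps just described (assuming image is an 8-bit grayscale NumPy array, like the one loaded with OpenCV later in this post):

import numpy as np

def manual_histogram(image: np.ndarray) -> np.ndarray:
    # Vector of 256 zeros: one counter per possible 8-bit intensity value
    histogram = np.zeros(256, dtype=np.int64)
    # Capture the image's height and width attributes
    height, width = image.shape[:2]
    # Iterate over each pixel, incrementing the bin for its intensity value
    for y in range(height):
        for x in range(width):
            histogram[image[y, x]] += 1
    return histogram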

Histogram Calculation in NumPy

The NumPy library has the np.histogram() function, which allows calculating histograms of input data. This function is quite flexible and can handle different bin configurations and ranges.

numpy.histogram(
    a: np.ndarray,
    bins: Union[int, np.ndarray, str] = 10,
    range: Optional[Tuple[float, float]] = None,
    density: Optional[bool] = None,
    weights: Optional[np.ndarray] = None
) -> Tuple[np.ndarray, np.ndarray]

See the example usage in the code below and note that when we use np.histogram() to calculate the histogram of a grayscale image, we set the number of bins to 256 to cover all pixel intensity values from 0 to 255.

However, it's essential to understand how the library defines the bins. With 256 bins over the range [0, 256], NumPy uses the half-open intervals [0, 1), [1, 2), and so on, with the last bin being [255, 256]. In practice, this means the returned bins array contains 257 edges (one more than the 256 bins we requested).

Since pixel intensity values only range between 0 and 255, we don't need this extra edge and can simply ignore it, as the quick check below illustrates.
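
A quick, self-contained check of this behavior (using a small, hypothetical pixel array rather than a real image):

import numpy as np

pixels = np.array([0, 0, 128, 255], dtype=np.uint8)
hist, bins = np.histogram(pixels, 256, [0, 256])
print(hist.shape)  # (256,) -> one count per intensity level
print(bins.shape)  # (257,) -> 257 bin edges; the final edge (256) can be ignored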

import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load an image in grayscale
image_path = "carlos_hist.jpg"
image = cv2.imread(image_path, 0)

# Calculate the histogram
hist, bins = np.histogram(image.flatten(), 256, [0, 256])

# Compute the cumulative distribution function
cdf = hist.cumsum()
cdf_normalized = cdf * hist.max() / cdf.max()

Initially, we load the image carlos_hist.jpg in grayscale using the flag 0 in cv2.imread(image_path, 0). With the image loaded, our next step is to calculate the histogram. We use the np.histogram() function from NumPy for this. This function counts the frequency of occurrence of each pixel intensity value.

To better understand the intensity distribution, we will also calculate the cumulative distribution function (CDF). The CDF is essential for techniques like histogram equalization.

# Plot histogram and C.D.F.
fig, axs = plt.subplots(1, 2, figsize=(10, 5))

# Show the image in grayscale
axs[0].imshow(image, cmap="gray", vmin=0, vmax=255)
axs[0].axis("off")

# Plot the histogram and the CDF
axs[1].plot(cdf_normalized, color="black", linestyle="--", linewidth=1)
axs[1].hist(image.flatten(), 256, [0, 256], color="r", alpha=0.5)
axs[1].set_xlim([0, 256])
axs[1].legend(("CDF", "Histogram"), loc="upper left")

plt.show()

This code displays the image, its histogram, and the CDF side by side using Matplotlib, with the figure set up as two subplots.

Visually, just by looking at the image, we can see that it looks “flat”, with little contrast. This is indeed corroborated by the histogram on the right.

As most pixel intensity values are concentrated around 150, the image has a predominance of mid to light tones, resulting in a less vibrant appearance with little contrast variation.

Histogram Equalization for Grayscale Images

As we mentioned at the beginning of the article, histogram equalization is a technique to adjust the contrast of an image by distributing pixel values more uniformly across the intensity range.

There are several possible techniques, depending on the context, number of channels, and application. In this section, I will show you how to use the cv2.equalizeHist() function to perform this process on grayscale images.

cv2.equalizeHist(
    src: np.ndarray
) -> np.ndarray

The cv2.equalizeHist function equalizes the histogram of the input image using the following algorithm:

  1. Calculate the histogram H for src.
  2. Normalize the histogram so that the sum of the histogram bins is 255.
  3. Calculate the cumulative histogram:

        \[H'_i = \sum_{0 \leq j < i} H(j)\]

  4. Transform the image using H' as a look-up table:

        \[\text{dst}(x,y) = H'(\text{src}(x,y))\]
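
For intuition, here is a minimal NumPy sketch of these steps. It follows the common CDF-based formulation, is not OpenCV's exact implementation (OpenCV handles the scaling of the cumulative histogram slightly differently), and ignores the degenerate case of a constant image, but it produces a comparable result on 8-bit grayscale images:

import numpy as np

def equalize_hist_sketch(src: np.ndarray) -> np.ndarray:
    # Step 1: histogram of the 8-bit grayscale image
    hist = np.bincount(src.ravel(), minlength=256)
    # Steps 2-3: cumulative histogram, rescaled so its values span 0-255
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    lut = cdf.astype(np.uint8)
    # Step 4: use the cumulative histogram as a look-up table
    return lut[src]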

To see how this function normalizes the brightness and increases the contrast of the original image, let’s run the code below.

# Apply histogram equalization
equalized = cv2.equalizeHist(image)

# Create a figure with subplots to compare the images
plt.figure(figsize=(30, 10))

# Show equalized image
plt.subplot(1, 3, 1)
plt.imshow(equalized, cmap='gray', vmin=0, vmax=255)
plt.title('Equalized Image')

# Compare original and equalized histograms
plt.subplot(1, 3, 2)
plt.hist(image.flatten(), 256, [0, 256])
plt.title('Original Histogram')
plt.subplot(1, 3, 3)
plt.hist(equalized.flatten(), 256, [0, 256])
plt.title('Equalized Histogram')
plt.show()

First, we apply histogram equalization to the loaded image using the cv2.equalizeHist function. Next, we create a figure with subplots to analyze the final result.

The first subplot shows the equalized image, while the next two subplots compare the histograms of the original and processed images, using plt.hist to plot them directly. Notice how the distribution change made the image more visually appealing.

Histogram Equalization for Color Images

If we try to perform histogram equalization on color images by treating each of the three channels separately, we will get a poor and unexpected result.

The reason is that when each color channel is transformed in a non-linear and independent manner, entirely new colors, unrelated to the original ones, may be generated.
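
For reference, the naive per-channel approach we are warning against would look like this sketch (the image path is hypothetical):

import cv2

# Not recommended: equalize each BGR channel independently
bgr = cv2.imread("example.jpg")  # hypothetical color image
b, g, r = cv2.split(bgr)
naive = cv2.merge([cv2.equalizeHist(b),
                   cv2.equalizeHist(g),
                   cv2.equalizeHist(r)])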

The correct way to perform histogram equalization on color images involves a prior step, which is conversion to a color space like HSV, where intensity is separated:

  1. Transform the image to the HSV color space.
  2. Perform histogram equalization only on the V (Value) channel.
  3. Transform the image back to the RGB color space.
# Read the astrophoto
astrophoto = cv2.imread('astrophoto.jpg')

# Convert to HSV color space
hsv_astrophoto = cv2.cvtColor(astrophoto, cv2.COLOR_BGR2HSV)

# Split the HSV channels
h, s, v = cv2.split(hsv_astrophoto)

First, we load a new color image using cv2.imread. Then, we convert the originally loaded image in BGR format to the HSV color space with cv2.cvtColor and separate the color information (Hue and Saturation) from the intensity (Value) using cv2.split(hsv_astrophoto).

# Equalize the V (Value) channel
v_equalized = cv2.equalizeHist(v)

# Merge the HSV channels back, with the equalized V channel
hsv_astrophoto = cv2.merge([h, s, v_equalized])

# Convert back to RGB color space
astrophoto_equalized = cv2.cvtColor(hsv_astrophoto, cv2.COLOR_HSV2RGB)

After applying histogram equalization only to the V channel, we merge the HSV channels back but replace the V channel with the equalized channel.

The redistribution of intensity values in the V channel significantly improved the contrast of the image, highlighting details that may have been obscured.

However, the histogram equalization we just saw may not be the best approach in many cases, as it considers the global contrast of the image. In situations where there are large variations in intensity, with very bright and very dark pixels present, or where we would like to enhance only one region of the image, this method may cause us to lose a lot of information.

To address these issues, let’s look at a more advanced technique called Contrast Limited Adaptive Histogram Equalization (CLAHE).

Contrast Limited Adaptive Histogram Equalization (CLAHE)

Contrast Limited Adaptive Histogram Equalization (CLAHE) is a technique that divides the image into small regions called “tiles” and applies histogram equalization to each of these regions independently.

This allows the contrast to be improved locally, preserving details and reducing noise. Additionally, the CLAHE method has the ability to limit the increase in contrast (hence the term “Contrast Limited”), preventing noise amplification that may occur in the regular technique.

cv2.createCLAHE(
    clipLimit: float = 40.0,
    tileGridSize: Optional[Tuple[int, int]] = (8, 8)
) -> cv2.CLAHE

The implementation of CLAHE in OpenCV is done using the createCLAHE() function. First, a CLAHE object is created with two optional arguments: clipLimit and tileGridSize.
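
Before the full example, here is a minimal sketch of how the object is used, reusing the grayscale image loaded in the first example of this post:

import cv2

# Create the CLAHE object and apply it to an 8-bit grayscale image
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
image_clahe = clahe.apply(image)  # `image` is the grayscale image loaded earlier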

In this last example, let’s use a photo I took with my wife at the Chapada dos Veadeiros National Park in Brazil.

# Read the chapada image
chapada = cv2.imread('chapada.png')

# Convert to HSV color space
chapada_hsv = cv2.cvtColor(chapada, cv2.COLOR_BGR2HSV)

# Create a CLAHE object
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
chapada_hsv[:, :, 2] = clahe.apply(chapada_hsv[:, :, 2])

# Convert back to RGB color space
chapada_equalized = cv2.cvtColor(chapada_hsv, cv2.COLOR_HSV2BGR)

After loading the image chapada.png, we convert it to the HSV color space to maintain consistency. Then, we create a CLAHE object with cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)) and apply it to the V channel. Finally, we convert the image back to the BGR color space to visualize the result.


In the original image, there was a well-lit region (fence, tree, and me and my wife). If we had opted for global histogram equalization, the entire image would have been adjusted uniformly, potentially leading to a loss of details in very bright or very dark areas.

However, by using CLAHE, the algorithm exploited local context, equalizing each tile individually to adjust the contrast of each small region of the image. This preserved details in both bright and dark areas, resulting in a final image with enhanced contrast and more natural colors.

Takeaways

  • Image Histogram: An image histogram graphically represents the distribution of pixel intensities, indicating whether an image is correctly exposed and helping to enhance hidden details.
  • Histogram Usage: Histograms are used for various applications, including the segmentation of parts on factory conveyors, identifying spectral signatures in hyperspectral images, and detecting robust local features with algorithms like SIFT and HOG.
  • Formal Definition: The histogram of a digital image is a discrete function that counts the frequency of each pixel intensity value, which can be normalized to represent the probability of occurrence of each intensity level.
  • Histogram Equalization for Grayscale Images: The cv2.equalizeHist() function in OpenCV adjusts the contrast of a grayscale image by distributing pixel values more uniformly across the intensity range.
  • Limitations of Global Equalization: Global histogram equalization may not be ideal in cases with large intensity variations, as it can lead to information loss in very bright or dark areas of the image.
  • Histogram Equalization for Color Images: When handling color images, histogram equalization should be applied to the intensity (V) channel of the HSV color space to avoid generating unnatural colors.
  • CLAHE – Contrast Limited Adaptive Histogram Equalization: CLAHE divides the image into small regions and applies histogram equalization locally, preserving details and avoiding noise amplification. This technique is effective for improving the contrast of images with large variations in lighting.

Cite this Post

Use the entry below to cite this post in your research:

Carlos Melo. “Histogram Equalization with OpenCV and Python”, Sigmoidal.AI, 2024, https://sigmoidal.ai/en/histogram-equalization-with-opencv-and-python/.

@incollection{CMelo_HistogramEqualization,
    author = {Carlos Melo},
    title = {Histogram Equalization with OpenCV and Python},
    booktitle = {Sigmoidal.ai},
    year = {2024},
    url = {https://sigmoidal.ai/en/histogram-equalization-with-opencv-and-python/},
}

 
