  1. What's the meaning of dimensionality and what is it for this data?

May 5, 2015 · I've been told that dimensionality usually refers to the attributes or columns of the dataset. But in this case, does it include Class1 and Class2? And does dimensionality mean, …

  2. Curse of dimensionality- does cosine similarity work better and if …

Apr 19, 2018 · When working with high-dimensional data, it is almost useless to compare data points using Euclidean distance - this is the curse of dimensionality. However, I have read that …
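The distance-concentration effect behind this question is easy to demonstrate. The NumPy sketch below (sample sizes and the standard-normal data are illustrative choices, not from the question) measures the relative contrast between the farthest and nearest neighbour of a query point; as the dimension grows, the contrast collapses, which is why raw Euclidean distances become uninformative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Distance concentration: as dimensionality grows, the gap between a query
# point's nearest and farthest neighbour shrinks relative to the distances
# themselves, so Euclidean nearest-neighbour comparisons lose meaning.
contrasts = {}
for d in (2, 100, 10_000):
    points = rng.standard_normal((500, d))   # 500 random data points
    query = rng.standard_normal(d)           # one random query point
    dists = np.linalg.norm(points - query, axis=1)
    contrasts[d] = (dists.max() - dists.min()) / dists.min()
    print(f"d={d:>6}: relative contrast = {contrasts[d]:.3f}")
```

Note that cosine similarity is not immune: for this kind of isotropic data it concentrates in high dimensions too, which is part of what the linked answers debate.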

  3. What should you do if you have too many features in your dataset ...

    Aug 17, 2020 · Whereas dimensionality reduction removes unnecessary/useless data that generates noise. My main question is, if excessive features in a dataset could cause overfitting …

  4. What is the curse of dimensionality? - Cross Validated

    I cannot expound, but I believe I've heard what sound like three different versions of the curse: 1) higher dimensions mean an exponentially-increasing amount of work, and 2) in higher …
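The "exponentially-increasing amount of work" version of the curse can be shown with one line of arithmetic: covering a unit hypercube at a fixed per-axis resolution needs a number of cells that grows exponentially in the dimension. A tiny illustration (the 0.1 grid spacing is an arbitrary choice for the example):

```python
# Cells needed to tile the unit hypercube with a grid of side 0.1:
# 10 bins per axis, so 10**d cells in d dimensions. Keeping a fixed
# sample density therefore requires exponentially many points.
cells = {d: 10**d for d in (1, 2, 3, 10)}
for d, n in cells.items():
    print(f"d={d:>2}: {n:,} cells")
```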

  5. Variational Autoencoder − Dimension of the latent space

    What do you call a latent space here? The dimensionality of the layer that outputs means and deviations, or the layer that immediately precedes that? It sounds like you're talking about the …

  6. dimensionality reduction - Relationship between SVD and PCA.

    Jan 22, 2015 · However, it can also be performed via singular value decomposition (SVD) of the data matrix $\mathbf X$. How does it work? What is the connection between these two …
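The SVD-PCA connection asked about here can be verified numerically. Writing the centred data matrix as $\mathbf X = \mathbf{USV}^\top$, the right singular vectors $\mathbf V$ are the principal axes and the eigenvalues of the covariance matrix satisfy $\lambda_i = s_i^2/(n-1)$. A minimal sketch with synthetic data (the matrix shape is an arbitrary choice for the demo):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 5))
Xc = X - X.mean(axis=0)                  # PCA requires column-centred data
n = len(Xc)

# Route 1: classic PCA via eigendecomposition of the covariance matrix.
cov = Xc.T @ Xc / (n - 1)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending order

# Route 2: SVD of the centred data matrix, X = U S V^T.
# Singular values relate to covariance eigenvalues by lambda_i = s_i^2/(n-1).
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
eigvals_from_svd = s**2 / (n - 1)

print(np.allclose(eigvals, eigvals_from_svd))  # True
```

In practice libraries compute PCA through the SVD route, since it avoids forming the covariance matrix and is numerically better conditioned.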

  7. clustering - Which dimensionality reduction technique works well …

Sep 10, 2020 · Which dimensionality reduction technique works well for BERT sentence embeddings?

  8. machine learning - What is a latent space? - Cross Validated

    Dec 27, 2019 · In machine learning I've seen people using high dimensional latent space to denote a feature space induced by some non-linear data transformation which increases the …

  9. What does 1x1 convolution mean in a neural network?

The most common use case for this approach is dimensionality reduction, i.e. typically M < N is used. Actually, I'm not quite sure whether there are many use cases for increasing the dimensionality, …
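The point made in this answer, that a 1x1 convolution with N input and M output channels is just an M x N linear map applied independently at every spatial position, can be sketched in NumPy (the channel and spatial sizes below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# A 1x1 convolution mixes channels at each pixel independently: with N
# input channels and M output channels it is an M x N matrix applied
# per position, so choosing M < N reduces the channel dimensionality.
N, M, H, W = 64, 16, 8, 8
x = rng.standard_normal((N, H, W))       # input feature map
weights = rng.standard_normal((M, N))    # the 1x1 kernel: one M x N matrix

# Equivalent matrix form: flatten the spatial dims, multiply, reshape back.
y = (weights @ x.reshape(N, H * W)).reshape(M, H, W)
print(x.shape, "->", y.shape)  # (64, 8, 8) -> (16, 8, 8)
```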

  10. What're the differences between PCA and autoencoder?

Oct 15, 2014 · Both PCA and autoencoders can do dimension reduction, so what are the differences between them? In what situations should I use one over the other?
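One way to frame the comparison asked about here: PCA reduction to k dimensions is itself a linear encode/decode pair, which a linear autoencoder trained with squared error recovers (up to rotation), while a nonlinear autoencoder generalises it. A minimal NumPy sketch of the PCA side, using synthetic data chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((300, 10)) @ rng.standard_normal((10, 10))
Xc = X - X.mean(axis=0)

# PCA as an encode/decode pair: project onto the top-k principal axes
# (the "encoder"), then map back to the original space (the "decoder").
k = 3
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
codes = Xc @ Vt[:k].T        # encode: k-dimensional representation
recon = codes @ Vt[:k]       # decode: back to 10 dimensions

# By Eckart-Young, this is the best rank-k linear reconstruction; the
# squared error equals the sum of the discarded squared singular values.
err = np.linalg.norm(Xc - recon) ** 2
print(f"rank-{k} reconstruction error: {err:.2f}")
```

An autoencoder replaces the two linear maps with trained (usually nonlinear) networks, which can beat this error on data with curved structure, at the cost of convexity and interpretability.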