Naive Bayes should generate predictions given missing features (scikit-learn)
Since Naive Bayes makes predictions from probabilities and treats features as conditionally independent of each other, it makes sense that the model should still be able to make a prediction when some features are missing from the test data.
I know it is common practice to impute missing data, but why do this when Naive Bayes should be able to make a prediction with some features missing?
Can this be implemented in scikit-learn? I tried a test set with fewer features and got a ValueError because the shapes are not aligned.
So this is possible in theory, but is it possible in scikit-learn?
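For reference, the error described above can be reproduced with a small sketch like the following (the toy data is my own, not from the question); a fitted estimator refuses a test matrix with fewer columns than it was trained on:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy training data: 4 samples, 3 features, 2 classes (illustrative only).
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 1.0, 0.0],
              [3.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
y = np.array([0, 0, 1, 1])
clf = GaussianNB().fit(X, y)

try:
    # Test point with only 2 of the 3 training features.
    clf.predict(np.array([[1.0, 2.0]]))
except ValueError as e:
    print("ValueError:", e)
```

scikit-learn validates the feature count at predict time, so there is no built-in way to pass a partially observed row.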

Your question is sensible. In the classical Naive Bayes classifier (as implemented in sklearn), the posterior probability is calculated by summing the log conditional probabilities of all the features in the dataset. Even though the features are treated as conditionally independent, all of them are always used to learn the classification probabilities in this setup, and once the model has been learned you still need all of those features to calculate the posterior for a new observation. The conditional independence is just an assumption made so that the statistics and math work out.
But by slightly modifying the way the posterior is calculated, you can use a Bayesian approach to make predictions even when certain features are absent: because the likelihood factorises per feature, dropping the terms for the missing features marginalises them out. Making predictions this way in the absence of certain features is still ongoing work; you may want to have a look at this paper, in which a Bayesian approach is applied to classification in astronomy with missing data.