Today, however, those who believe in "science" are derided. Fine. So what is the alternative?
The alternative is not treating science as a belief system, with priests and holy dogma, but instead treating it as what it is: a highly effective set of systems for developing knowledge. The people who talk about "believing in the science" are usually in the first category. They've chosen a set of prophets, treat their word as infallible, and react to anyone who speaks against those words as a heretic, regardless of that person's qualifications or the soundness of their arguments. A good example is when people started posting on social media that their doctor had advised them against taking the vaccine, usually because of some pre-existing medical condition, and they were told they should get another doctor.
Exactly! Science is a process, not a product. The most that can ever be said for any product is: "We arrived at this result using the scientific method, and this result does not appear to be contradicted by additional observation." Modern "scientific" practice, however, tends to pick a particular outcome first, then gather evidence to support it. This is especially true in cases where large sums of money (pharmaceuticals, medicine, publicly funded research) depend on the outcome of the research. It is also a function of the "publish or perish" structure of most research universities. Research that produces a negative result or no conclusion tends not to get published. Replicating a previous experiment tends not to get published. Extraordinary results, even if unlikely to be accurate, get published.
https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124
https://phys.org/news/2018-07-beware-scientific-studiesmost-wrong.html
It is not standard scientific practice to decide on a result, and then search for supporting evidence. It does happen, and you're correct it's often associated with grants from special interest groups, but it's at least a managed problem. It's why declaring all potential conflicts of interest is so important, for instance.
You're correct about the publication bias toward studies with interesting results, and against studies that show something doesn't work. Though since the replication crisis, there's been an active attempt to counter that, with many attempts to confirm or disconfirm significant studies, and a push to provide outlets where negative results can be published (there isn't a page limit in electronic journals).
But the biggest problem is just statistics. Let's say spontaneous combustion has become the leading cause of death in the world, and your research on luciferian pathways comes up with 1,000 different chemicals you think might help reduce the chance of bursting into flame. Let's assume that none of the chemicals actually work, but you don't know that. So you run 1,000 trials, where you give each chemical to a different group of people. And then you wait a few years, and check to see how many in each group died a horrible witch death. Using statistical analysis, you calculate a p value for each chemical, and look for values < 0.05. That 0.05 threshold means there's only a 5% chance of seeing a result at least that strong by chance alone. Except you tested 1,000 chemicals. 5% of 1,000 is 50. That means you'll find, on average, about 50 chemicals that appear to have a statistically significant chance of reducing autoholocausts... except we know that none of the chemicals work. So all 50 "positive" results are false positives.
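You can actually watch this happen in a toy simulation. Here's a Python sketch of the thought experiment above; the group size and base rate are invented numbers, and it uses an exact one-sided binomial test since every chemical is (by construction) inert:

```python
import math
import random

random.seed(42)

N_CHEMICALS = 1000  # candidate chemicals; by assumption, none of them work
GROUP_SIZE = 500    # people in each trial group (made-up number)
BASE_RATE = 0.10    # baseline chance of spontaneous combustion (made-up)

def binom_pmf(k, n, p):
    """Exact probability of k deaths out of n people at death rate p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def lower_tail_p(observed, n, p):
    """One-sided p value: chance of seeing `observed` or fewer deaths
    if the chemical does nothing at all (exact binomial test)."""
    return sum(binom_pmf(k, n, p) for k in range(observed + 1))

false_positives = 0
for _ in range(N_CHEMICALS):
    # Every chemical is inert, so deaths just follow the base rate.
    deaths = sum(random.random() < BASE_RATE for _ in range(GROUP_SIZE))
    if lower_tail_p(deaths, GROUP_SIZE, BASE_RATE) < 0.05:
        false_positives += 1  # looks protective, but isn't

print(f"'Significant' chemicals out of {N_CHEMICALS}: {false_positives}")
```

Because the exact binomial test is discrete, the realized false-positive rate comes out a bit under 5%, but you still get dozens of "significant" chemicals despite none of them doing anything.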
A lot of studies work like that. For example, genetic research often tests every single gene, looking for correlations, which can end up producing a lot of hot statistical garbage. That this happens has been verified empirically. There are ways to compensate for it, but a lot of medical researchers aren't very good at applying them, and even when it's done right, it's often difficult to conduct enough testing to be sure, especially when the signal is weak (which it often is).
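For the curious, two of the standard compensation methods are the Bonferroni correction (shrink the significance threshold by the number of tests) and the Benjamini-Hochberg procedure (control the false discovery rate instead). A rough Python sketch of both; function names are my own, and real analyses would normally use a vetted library implementation rather than hand-rolled code:

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Bonferroni: each test must clear alpha divided by the number
    of tests. Simple and safe, but very conservative."""
    cutoff = alpha / len(p_values)
    return [p < cutoff for p in p_values]

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg: sort p values, find the largest rank r
    where p_(r) <= alpha * r / m, and accept everything up to r.
    Controls the expected fraction of false discoveries."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    threshold_rank = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= alpha * rank / m:
            threshold_rank = rank
    significant = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= threshold_rank:
            significant[i] = True
    return significant
```

Applied to the 1,000-chemical example, Bonferroni would demand p < 0.00005 per chemical, which is why underpowered studies with weak signals struggle so badly once you correct properly.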