The practice of public-funded science, like all public-funded activities, is under close scrutiny. Before the end of the cold war, it required and received little explicit justification; now it requires a great deal. This justification is difficult because the public at large is ill informed about the process of science, and many if not most students of science arrive in graduate school without any introduction to the process, as opposed to the products, of science. Anyone who doubts that science is under attack must be unaware of the attempts of social scientists to define science as a consensual mirage visible only to those who agree on the rules of science (e.g., the "Sokal affair") or the attempts of "creation scientists" to gain curricular entry into public schools, most recently under the banner of "intelligent design." These efforts seem ludicrous to a person versed in the process of science but are difficult to counter in argument with those not so versed. The fact that scientific explanation is always and forever tentative and open to alternative interpretation (a lesson driven home by Einstein's overthrow of Newton) distinguishes it quite clearly from its religious mimics (cf. the compelling research paper by an undergraduate who remains anonymous for fear of persecution by religious zealots). An effective test for religion masquerading as scientific authority is that alternative interpretations are not tolerated, let alone encouraged. I perceive no necessary conflict between spirituality and science, but they reflect awe of the natural world in different ways, and I like to know in which activity I am engaged.
One explanation of the scientific method is that it is a means of advancing knowledge by embracing and controlling (but never eliminating) uncertainty, that is, by knowing where the uncertainty lies and keeping it in check. Some individuals are drawn to organized religions because they cannot tolerate uncertainty. Some scientists, on the other hand, put on blinders about any phenomenon for which there is currently no mechanistic, scientific explanation. Simply because a phenomenon cannot be explained "scientifically" at present does not mean that it will be inexplicable to future science. If it were so, there would be no purpose in present or future science. Good science and good humor have a lot in common. An objective scientist carries two or more competing explanations or meanings at the same time without committing to one over the others; humor often does the same (e.g., Groucho Marx's famous quip, "Time flies like an arrow; fruit flies like a banana").
Science is best known to non-scientists through its products: published knowledge and technology. This knowledge comprises both data (observations) and theory. Most teaching of science to non-scientists consists of compendia of such knowledge. In many cases science students arrive in graduate school with an encyclopedic collection of it but almost no experience in how it is obtained. Laboratory courses might appear to teach the methods of science, but few give insight into how new knowledge is obtained because they treat situations in which the correct answer is already well known, at least to the laboratory instructor. The process of science, on the other hand, entails connecting observations with previously untested or insufficiently tested predictions, stated as theories or hypotheses. Science as a practice is the means of gaining new knowledge about the operation of the natural world.
Philosophers, rather than scientists, are the better known articulators of how science should be done (arguably the domain of philosophy or ethics) and even of how it is done (arguably better left to scientists). If you are or wish to become a scientist, you need to know, and be able to articulate, how science is done (at the very least by you). What follows is a brief outline of my position. Don't accept or adopt it; rather, use it as a stimulus and a beginning source of references from which to formulate your own. Part of what makes science fun is that authority should be questioned and overthrown when logic or observation rules against it. Trust your logic and observations more than you trust my words, and please let me know where you find departures.
Science -- as popularized -- is largely deductive, based on "if-then" statements ("if the world works like X, then we should see Y"). A useful analogy is with a complex game: you get to make limited observations and must try to guess the rules. Scientists try to guess, from limited observations, the rules by which nature operates. Unlike in mathematics or the jargon of law, nothing can be proved or disproved in science; an idea can only be rejected when its predictions are falsified by data. Even then, theories are not rejected until a better theory comes along. Ideas once rejected can be resurrected, but the methodologies of science and of conventional statistics (which control type I or α error) intentionally make rejection of true ideas difficult or at least unlikely. Rather than being "negative" in any pejorative sense, this philosophy of looking for ways to discard bad ideas recognizes (quite constructively) that some cleverer or more complicated explanation devised in the future may fit these data and new ones better. For that reason, and demarcating science most clearly from religion, science and its explanations are always regarded as tentative.
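The claim that conventional statistics makes wrongful rejection of a true idea unlikely can be illustrated with a short simulation (a hypothetical sketch of my own, not drawn from the references below): when a null hypothesis is in fact true, a test calibrated at α = 0.05 rejects it in only about 5% of repeated experiments.

```python
import random

random.seed(1)

# Hypothetical simulation: the null hypothesis (true mean = 0) is in fact
# true, yet we run a two-sided significance test on each of many experiments.
alpha = 0.05
n_experiments = 10_000
n_obs = 30

rejections = 0
for _ in range(n_experiments):
    # Data generated with the null hypothesis true.
    sample = [random.gauss(0, 1) for _ in range(n_obs)]
    mean = sum(sample) / n_obs
    var = sum((x - mean) ** 2 for x in sample) / (n_obs - 1)
    se = (var / n_obs) ** 0.5
    z = mean / se
    # Reject at the 5% level (large-sample critical value ~1.96).
    if abs(z) > 1.96:
        rejections += 1

# The fraction of wrongful rejections hovers near the nominal alpha = 0.05:
# controlling type I error makes rejection of a true idea rare by design.
print(rejections / n_experiments)
```

The simulated rate runs slightly above the nominal 5% because the large-sample critical value is applied to only 30 observations; the point stands that the error rate is controlled at a known, small level.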
There are many good guides to the practice of deductive science. Karl Popper's (e.g., Popper 1965) are among the finest. Particularly good and short introductions are provided by Platt (1964) and Box (1976). I try to reread these two papers at least yearly to keep from getting stuck in an unproductive rut. Box (1976, p. 792) offers a nice counterpoint to the proliferation of overparameterization in quantitative modeling: "Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad." Both Platt and Box stress the need for close coupling of predictions and tests to prevent useless collection of data or equally useless theorizing: theories guide the collection of data, and data dictate which branches of theory to prune. Breaking the feedback loop between theory and observation and proceeding with one alone is deadly. Another point made well in all three of these references is that a hypothesis that can be recognized as irrefutable before testing is worse than useless. Because science advances by rejecting seemingly plausible hypotheses in favor of others, it is not worth spending time on a hypothesis that cannot be endangered. Lewontin (1970) offers some excellent criteria for identifying situations in which a hypothesis cannot conceivably be rejected and therefore should not be pursued.
These papers are inspiring and useful on some points but do not, in my opinion, accord a strong enough role to the creative (theory-constructing) part of science. To my knowledge, Sir Karl Popper never told his readers how or where theories arise, only how to slay them effectively. The treatment that rings truest to me is that of Lakatos (1970), an economist-philosopher and keen observer of science who compared successful with unsuccessful research programs, including the same laboratory at times when it was considered successful and unsuccessful. What best separated progressive from degenerating research programs was an emphasis on prediction ("excess empirical content"). Degenerating research programs made up ad hoc explanations of results, often explaining away anomalies that disagreed with an entrenched theory. Kuhn's (1979) history of scientific revolutions is better known and certainly is fascinating reading, but it provides little guidance on what a scientist should do from day to day when a revolution is not raging, save be a closet anarchist.
My position is that what divides science from non-science is the making and testing of predictions based on, or at least working explicitly toward, mechanistic understanding. If I find myself neither working from nor toward prediction, I change what I am doing. Some ecologists like to argue that understanding of mechanism is not yet feasible in complex biological systems and therefore should not be pursued. Their argument would be less hollow if one could find a celebrated success in science that did not involve insight into mechanism. To the contrary, Darwin is a giant because he injected mechanism into evolution, and molecular biology advances rapidly because it ferrets out mechanism. Political science is not science. By my strict definition, parts of ecology and oceanography are not science. Complex, nonlinear systems with abundant feedbacks and no clear steady states (e.g., ecosystems) are admittedly tough to study scientifically. The most important distinction between science and "pseudoscience" is whether one is working on, or at least toward, mechanistic understanding. To remove some emotions (and add others), consider the economy. Economists are pretty good at post hoc explanations of what happened. Patterns of bull-and-bear alternation occur in past data. Economic predictions and mechanistic understanding of what will happen when an economic "knob" is "tweaked," however, are so rudimentary as to escape my definition of science. Beware the temptation to confuse prediction from mechanistic understanding with prediction from statistical pattern in past data, especially when investing your own funds ("Past performance is no guarantee..."). Statistical regularity is a great source of inspiration for generating hypotheses about mechanism, but don't mistake statistical regularity for an understanding of mechanism. Beware of vapid prediction that, when distilled, says no more than that I will see again what I saw before.
Can you explain only old results (post hoc), or can you predict new results (a priori) that were not anticipated before understanding the mechanism?
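The danger of mistaking statistical pattern for mechanism can be sketched numerically (a hypothetical illustration; the data and models below are invented for the purpose): a model that merely memorizes past data can match it perfectly yet fail on the first prediction outside past experience, while a model embodying the correct functional form extrapolates well.

```python
import random

random.seed(0)

# Invented data: the true "mechanism" is linear, y = 2x, observed with noise.
xs = [0, 1, 2, 3, 4, 5]
ys = [2.0 * x + random.gauss(0, 0.5) for x in xs]

def interpolating_poly(xs, ys):
    """Pattern-matcher: the degree-5 polynomial that reproduces every past
    observation exactly (Newton's divided differences)."""
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    def p(x):
        val = coef[-1]
        for i in range(n - 2, -1, -1):
            val = val * (x - xs[i]) + coef[i]
        return val
    return p

# Stand-in for mechanistic understanding: a least-squares straight line,
# i.e., the true functional form with parameters estimated from the data.
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

poly = interpolating_poly(xs, ys)
x_new = 10.0                      # a prediction outside past experience
truth = 2.0 * x_new
err_poly = abs(poly(x_new) - truth)
err_line = abs(slope * x_new + intercept - truth)

# The memorizer's error at x_new is typically orders of magnitude larger:
# it "explains" every old observation but predicts nothing new.
print(err_poly, err_line)
```

Both models explain the old results; only the one carrying the right mechanism predicts a new one. That is the asymmetry the question above is probing.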
Lewontin, R. C. 1970. The Genetic Basis of Evolutionary Change. Columbia Univ. Press, New York. 346 pp. Dynamic sufficiency and tolerance limits of models vs. empirical sufficiency (of measurements) are clearly outlined on pp. 6-12.