And while doing that, you need to be aware of the differences between human-spoken logic and computer-spoken logic so that you don't accidentally condemn Tom to finding happiness only when watching football while eating pancakes.
As computer language consists of both Propositional Logic (which assumes that the world contains facts) and First Order Logic (which assumes that the world contains objects, relations, and functions), one may argue that developers are well-equipped, and that computer language is all that is needed to write any algorithm or conditional rule required to translate a human requirement into code.
We argue that it isn't, primarily because of three major difficulties that stand in the way of developers. The first difficulty is brought about by the complexity of the logic, as we will see below; the second and third, brought about by time and uncertainty, will be covered in follow-up posts. So now, let's look more closely at the process of building software applications using computer logic made up of conditional constructions.
Boolean Algebra, the language of mathematics and machines, is the equivalent of Propositional Logic, and it has precise and well-defined constructions, or "machine words," that make up its vocabulary.
For instance, De Morgan's Law says the following: the negation of "a and b" is equivalent to "not a or not b," while the negation of "a or b" is equivalent to "not a and not b." Now, imagine a software program in which multiple statements using Boolean Algebra are joined together. The longer the conditional statements are, the harder it is to test their validity by reading the code alone.
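One quick way to convince yourself of De Morgan's Law is to mechanically check every row of the truth table. A minimal sketch in Java (the class name is just for illustration):

```java
public class DeMorgan {
    public static void main(String[] args) {
        boolean[] values = {false, true};
        for (boolean a : values) {
            for (boolean b : values) {
                // negation of "a and b" is equivalent to "not a or not b"
                if (!(a && b) != (!a || !b)) throw new AssertionError("AND case failed");
                // negation of "a or b" is equivalent to "not a and not b"
                if (!(a || b) != (!a && !b)) throw new AssertionError("OR case failed");
            }
        }
        System.out.println("De Morgan's Law verified for all truth-table rows");
    }
}
```

With only two variables there are four rows to check; with a long chained conditional, the number of rows doubles with every variable, which is exactly why validity becomes hard to confirm by reading alone.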
Chaining a couple of these statements together makes it very hard to verify the intended logic. This difficulty can also be measured by a metric called cyclomatic complexity, which Thomas McCabe came up with. Cyclomatic complexity is a quantitative measure of the number of linearly independent paths through a program's source code.
Even though its usefulness as a measure of software quality has been questioned, in general, in order to fully test a module, all execution paths through the module must be exercised. This also implies that a module with higher complexity is more difficult to understand since the programmer must understand the different pathways and the results of those pathways.
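As a hypothetical illustration of how paths multiply: a method with three decision points has a cyclomatic complexity of 4 (decision points plus one), so exercising every path requires at least four test cases. All names and thresholds below are made up for the example:

```java
public class ShippingCost {
    // Three decision points (one ternary and two ifs) give this method a
    // cyclomatic complexity of 4, so full path coverage needs four tests.
    public static double cost(double weightKg, boolean express, boolean international) {
        double base = weightKg > 10 ? 20.0 : 10.0; // decision 1
        if (express) base *= 2;                    // decision 2
        if (international) base += 15;             // decision 3
        return base;
    }
}
```

Each additional condition adds another path a programmer must hold in their head, which is the intuition behind the metric.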
I've run into a few cases where people have made use of rules engine products, and each time things don't seem to have worked out well (disclaimer: I'm not a statistically valid sample). Often the central pitch for a rules engine is that it will allow the business people to specify the rules themselves, so they can build the rules without involving programmers.
As so often, this can sound plausible but rarely works out in practice. Even so, there's still value in a BusinessReadableDSL , and indeed this is an area where I do see value in this computational model.
But here too lie dragons. The biggest one is that while it can make sense to cast your eyes down a list of rules and see that each one makes sense, the interaction of rules can often be quite complex - particularly with chaining. So I often hear that it was easy to set up a rules system, but very hard to maintain it because nobody can understand this implicit program flow. This is the dark side of leaving the imperative computational model.
For all the faults of imperative code, it's relatively easy to understand how it works. With a production rule system, it seems easy to get to a point where a simple change in one place causes lots of unintended consequences, which rarely work out well.
A rules engine is not a panacea for the business layer, so it should be used wisely. To elaborate on the point, consider a Loan Granting Application where the rules are limited, don't change over time, and no new parameters or fields get added. What can change is the interest rate, which can be read from a static final field. As a second example, consider a Library Management Application: while designing the application there is a fixed set of rules, and once implemented they don't change frequently, if at all.
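For applications like these, plain code is usually enough. A hedged sketch of the loan case (names and thresholds are hypothetical), where the only moving part is the interest rate held in a static final field:

```java
public class LoanRules {
    // The one value that changes between releases lives in a constant;
    // no rules engine is needed to keep this maintainable.
    static final double INTEREST_RATE = 0.065;

    // Fixed eligibility rules that do not change over time
    public static boolean isEligible(double income, double loanAmount, int creditScore) {
        return creditScore >= 650 && loanAmount <= income * 5;
    }

    public static double yearlyInterest(double loanAmount) {
        return loanAmount * INTEREST_RATE;
    }
}
```

When the rule set is this small and stable, the indirection of an external rules engine costs more than it saves.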
Hence, in such systems it's advisable not to use Drools, as it won't provide much benefit with respect to performance and control. In a similar post, I explain how you can use machine learning with Akka. Data is the most important thing in machine learning. Your model is only as good as your data. You want as much data as possible, and that may include both batch and real-time data sources.
Features are the inputs into models, and some ML platforms provide capabilities for you to create those features; others can automatically generate the features for you. Then there are the algorithms, the different techniques that can be used in a machine learning model. One of the most important aspects of managing a machine learning model is monitoring it for accuracy. A common fallacy with machine learning is that an ML model never needs to be retrained because it can learn by itself.
That is not the case: machine learning models have to be retrained every so often, as the data they were trained on starts to drift from the data they are executing against in production. By comparing the capabilities of machine learning platforms with rules engines, we can now see that there are similarities along with differences at the capability level.
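A minimal, hypothetical sketch of what this kind of drift monitoring can look like: compare the mean of a feature in production against its mean at training time, and flag drift when it moves past a tolerance. Real platforms use far more sophisticated statistical tests; this only illustrates the idea.

```java
public class DriftMonitor {
    // Flags drift when the production mean of a feature moves more than
    // `tolerance` (as a fraction) away from the training-time mean.
    public static boolean hasDrifted(double[] training, double[] production, double tolerance) {
        double trainMean = mean(training);
        double prodMean = mean(production);
        return Math.abs(prodMean - trainMean) > tolerance * Math.abs(trainMean);
    }

    static double mean(double[] xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum / xs.length;
    }
}
```

When the monitor fires, that's the signal to retrain the model on fresher data.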
Comparison between rules engines and machine learning platforms. So how do we make the decision of when to use a rules engine or machine learning? In summary, rules are a good fit when you need precision and know the logic. But is it always as clear cut as that? What if you wanted to use the power of both? The answer is you can. There are a number of hybrid patterns where you can use machine learning and rules together to determine an outcome.
Imagine the use case where you are a realtor wanting to provide the best guidance to your clients on purchasing a home. In this pattern, two different machine learning models execute. One determines the probability of a house selling in 10 days. Another determines the probability of the sellers dropping the asking price. Both of these predictions are an input into rules. The rules then evaluate the output of the model and ultimately provide a recommendation to the realtor.
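A hedged sketch of this first pattern (all names, probabilities, and thresholds are hypothetical): the two model probabilities come in as plain inputs, and the rules turn them into a recommendation.

```java
public class RealtorAdvisor {
    // Pattern: machine learning output is the input into rules.
    // In a real system the two probabilities would come from the trained
    // models; here they are plain parameters so the rules are self-contained.
    public static String recommend(double pSellsIn10Days, double pPriceDrop) {
        if (pSellsIn10Days >= 0.8) {
            return "Offer quickly: the house is likely to sell within 10 days";
        }
        if (pPriceDrop >= 0.7) {
            return "Wait: the sellers are likely to drop the asking price";
        }
        return "No strong signal: proceed at the client's pace";
    }
}
```

The point is the separation of concerns: the models estimate, and the rules decide, so the decision logic stays explicit and auditable.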
Pattern 1: leverage machine learning output as an input into rules. In the second pattern, we reverse the flow: the rules are the input into the machine learning models. Rules execute business logic to determine boolean values. Does the house need repairs? Is it the selling offseason? Do the sellers want to get rid of the house and sell it now?
The outputs of these rules then become features for the machine learning models. The machine learning models then provide a probability back to the realtor of the house selling in 10 days and of the sellers dropping the price.
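The second pattern can be sketched in the same hedged style (everything here is hypothetical, and the model is stubbed out as a function parameter): rules compute boolean features, which are then fed to the model.

```java
import java.util.function.Function;

public class RuleFeatures {
    // Pattern: rule outputs become boolean features for the ML model.
    public static double[] features(boolean needsRepairs, boolean offSeason, boolean motivatedSellers) {
        return new double[] {
            needsRepairs ? 1 : 0,
            offSeason ? 1 : 0,
            motivatedSellers ? 1 : 0
        };
    }

    // `model` stands in for the trained model's predict call
    public static double probabilityOfSale(double[] f, Function<double[], Double> model) {
        return model.apply(f);
    }
}
```

Encoding the rule outcomes as 0/1 features keeps the hand-written business knowledge and the learned model in one pipeline, each doing the part it is best at.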