Code depends on data and models, and therefore on the abstractions built around them, which makes refactoring inevitable today. Why? Refactoring usually means changing your code so that data can be used in a new way. We'll talk about the most common and least favorite kind of refactoring, the one that causes a snowball effect: it occurs when data models, table structures and business logic are altered.
Deep's philosophy describes everything by means of the Link concept. Any object is a Link, and any relationship is a Link. A relationship always has its `from` and `to` fields specified; in a standalone object, the `from` and `to` fields are not specified. This also distinguishes Deep's philosophy from graph databases, where an edge cannot itself serve as an object of relationships.
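Here is a minimal sketch of that idea in TypeScript; the `Link` shape and its field names are illustrative only, not Deep's exact schema or API.

```typescript
// Illustrative Link shape: everything is a link, and a link's type is itself a link.
interface Link {
  id: number;
  typeId: number;   // the type of this link (types are links too)
  fromId?: number;  // set only when the link is a relationship
  toId?: number;    // set only when the link is a relationship
}

// Standalone objects: from/to are not specified.
const user: Link = { id: 1, typeId: 10 };
const movie: Link = { id: 2, typeId: 11 };

// A relationship: "user watched movie".
const watched: Link = { id: 3, typeId: 20, fromId: user.id, toId: movie.id };

// Because the relationship is itself a Link, other links can point to it,
// which a typical graph-database edge cannot do.
const rated: Link = { id: 4, typeId: 30, fromId: user.id, toId: watched.id };
```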
You can always try to patch up backward compatibility issues by adding new adapters and layers. But fighting the symptoms will only postpone the consequences of the real problem. With every change in business logic, the onion effect grows, and ever more abstractions become intertwined with each other.
Many programmers will argue that this is a matter of clean code culture. We disagree. A clean code culture affects only the implementation. The problem is not how we write the code, no matter how much we want to believe it. The problem is that the code is fundamentally dependent on the abstractions of the business logic. Programmers try to fix the consequences of this problem, and not only fail to reduce the complexity of the code, they multiply it, all the while surrounding themselves with a false sense of comfort and control.
Deep.Case aims to defeat this enemy. Data models can evolve without any refactoring at all. How? To explain, we'll have to dig down to the root causes of the problem.
There are many kinds of code architecture. You can use GraphQL and schema generators, or you can map APIs and data abstractions yourself through an ORM/ODM, forwarding table rules into your code. You can expose a functional API or a REST API from the server. But in any of these cases, the operating contract of those APIs is defined either at the API level or at the level of the tables' column names. So we compensate for every induced change by paying developers to update the database, the generators, the resolvers and the APIs. The problem here is the separation between the implementation of the functionality and its integration into the business logic. This single layer of abstraction is multiplied by the intersection of business logic rules and the number of ways the columns are connected, which ultimately makes the cost of each wave of refactoring highly dependent on the age of the project. We have not been able to calculate this dependency factor precisely, but the price always turns out to be several times greater than the cost of the field modification and the behavior change in the business scenario alone.
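As a rough illustration of how one conceptual change fans out across these layers, consider a hypothetical setup; the entity, field and function names below are invented for this example, not taken from any particular framework.

```typescript
// Hypothetical layered setup. One conceptual change, say splitting
// "clientName" into separate first and last names, touches every layer
// that names the column.

// 1. ORM model, mirroring the table's columns.
interface Order {
  id: number;
  clientName: string;  // the column being split
  amount: number;
}

// 2. GraphQL-style resolver, written against the old column name.
function resolveClientName(order: Order): string {
  return order.clientName;
}

// 3. REST response shape that clients already depend on.
type OrderResponse = { id: number; client_name: string; amount: number };

function toResponse(order: Order): OrderResponse {
  return { id: order.id, client_name: order.clientName, amount: order.amount };
}

// 4. Permission / business rule, also written against the column.
function canEditOrder(userName: string, order: Order): boolean {
  return order.clientName === userName;
}

// Splitting the column means editing the migration, the interface, the
// resolver, the response type, the mapper and the rule, all in lockstep.
```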
Here, you must also always take user permissions and any other business rules into account. This multiplies the cost of each such wave of refactoring by the number of business rules. Deep.Case allows you to completely forget about this problem!
No matter how much we indulge in the illusion of being able to predict everything, this is obviously not the case. Even the most ideal model will change. What we considered a single connection will become multiple. What we considered multiple will begin to be referenced from several places. Models conceived for use in one place turn out to be needed in many others. Today, every such change requires developer involvement, again and again.
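For contrast, here is what one such change might look like under the link model sketched above, still using the illustrative `Link` shape rather than Deep's actual API: turning a single connection into multiple connections means inserting more links of the same type, with no table or model changes.

```typescript
interface Link { id: number; typeId: number; fromId?: number; toId?: number }

// Illustrative ids: a "Payer" link type, an order, and two users.
const PAYER_TYPE = 40;
const order = 5, alice = 6, bob = 7;

// "Order has one payer" and "order has many payers" share the same structure:
// a Payer-typed link from the order to each payer. Going from one to many
// requires no migration, just one more link of the same type.
const payerLinks: Link[] = [
  { id: 100, typeId: PAYER_TYPE, fromId: order, toId: alice },
  { id: 101, typeId: PAYER_TYPE, fromId: order, toId: bob },
];
```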
With Deep.Case, data is divided into answers to:
- the WHAT question, such as: payment, volume, need, moment, and others;
- the HOW IS RELATED question, such as: X wants Y, W watched Z, T owns R, T answers P.
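In link terms, this division might look roughly like the following sketch, again reusing the illustrative `Link` shape; the type names and ids are invented.

```typescript
interface Link { id: number; typeId: number; fromId?: number; toId?: number }

// Illustrative type ids.
const PAYMENT_TYPE = 1, USER_TYPE = 2, WANTS_TYPE = 3;

// WHAT: standalone links that name things; from/to are left unspecified.
const payment: Link = { id: 10, typeId: PAYMENT_TYPE };
const user: Link = { id: 11, typeId: USER_TYPE };

// HOW IS RELATED: links with from/to, e.g. "X wants Y".
const userWantsPayment: Link = { id: 12, typeId: WANTS_TYPE, fromId: user.id, toId: payment.id };
```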