Thursday, January 14, 2021

A Moment in Dynamic Pricing

TL;DR - Amazon: I can haz $22 plz?

I was sent an Amazon link to coffee. My price ($34.99) was very different from what the sender saw ($12.99) and what I could get directly from the merchant ($11).

What I saw on Amazon:

What they saw on Amazon:

What I found on the merchant's site (admittedly before shipping):

Tuesday, August 24, 2010

Bringing it back

All this discussion of programming methodologies, concepts, and design... I feel a little like I have gone off subject, and I might be right about that except that I am not. There is a recursion here that applies to the very suggestions I originally began to discuss in the trenches of code and object design.

Conceptual clarity matters. Concept oriented programming is important and valuable: it can provide a better use of the constrained set of resources you have available for implementation. The approach does seem to require a higher up-front investment, but over time, and often nearly immediately, it pays huge dividends (I should probably write a defense of this position). That points to an important assumption I make throughout much of this discussion: that a future exists and that your work will persist through time, especially if you've done your work well.

This is admittedly a big assumption to make. However, I will ask this: do you want it to be otherwise? If you do not take the approach I suggest, how many resources will you waste as time goes on if you turn out to be wrong about your project's failure? Is that the sort of prophecy you'd like to fulfill? Is failure what you want to aim and plan for? I'd imagine not. More problematically, if your current effort ends in failure, you will have learned and reinforced behaviors whose most probable effect is to increase the probability that your future efforts also end in failure. Life is cumulative, and your life's time, energy, and efforts are yet further constrained resources.

Wow... there he goes again. This guy gets off subject a lot. It's almost as if he enjoys the work, regards it as a great and enjoyable hobby that fits well in his life, treats that work as a subset of the entirety of his life, and attempts to maintain a healthy fusion. Funny that.

Getting back to it... Looking back at my previous posts here, about leaking semantics and dishonest object advertisements, what I am really talking about in those posts is the lack of conceptual unity and organization in the whipping-boy code examples. The concrete existence of such coding practices and implementations is the result of missing conceptually oriented development practices or a backing conceptual design. If the up-front work of design had been performed, and the work itself backed by a deep understanding of the task at hand, it is unlikely that such an implementation would ever have formed in the developer's mind. We would not have had to slog our way through their ugliness, and we would not have had to discuss why they were problematic. Instead, we could have been writing and reading a higher-level discussion of the machinations of software, maybe even (if lucky) beginning to ascend out from the depths of such an existence and stepping into a clearer, more elegant, and more pleasant world. A funny metaphor about life itself, I suppose, if you like the getting-off-track aspect of my writing.

Develop well.

Be human well.

For the rest of you:
Be sentient well.

Friday, August 20, 2010

Conceptual Design

So, implementation left to the reader...

Geez. The guy who writes this blog is a bit of a jerk, isn't he?

I'll at least discuss what I think is a precursor to conceptual development.

Specifically, here I mean a manner of design: its form, its presentation, and its methodology. Often, in my experience, the engineer creating a design starts by producing artifacts like schemas, API definitions, algorithms, et cetera. This seems to me to be a form of jumping the gun. What I mean is that unless the concepts (more formally, the domain space) are clearly identified and defined, along with the relationships between those concepts, the greatest truth of the design is left in its implicit space. I suppose this implicit, poorly defined form of the concepts keeps them and their definitions safe from the scrutiny and dispute of others. This practice seems to me, however, to be a form of weakness, and also a way of doing one's best to reduce the quality of the design itself. One of the problems with such an approach is that while the implicit position of the concepts keeps them safe from access by individuals other than the designer, it also keeps them safe from access by the designer. The designer is then less able to scrutinize them. As designer, you are less able to push the concepts for greater clarity and thereby less able to push them to produce the most optimal and efficient design they can be made part of.
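
To make that concrete, here is a toy sketch of what I mean by writing the concepts down first. The domain (a lending library), the interface names, and the policy are all invented for the illustration; the point is only that the concepts and their relationships are stated explicitly before any schema, API definition, or algorithm exists.

public interface ILoanable {
   string Title { get; }
   bool IsAvailable { get; }
}
public interface IBorrower {
   string Name { get; }
   int MaxConcurrentLoans { get; }
}
// A relationship between the two concepts, with its policy named in exactly one place.
public interface ILoanPolicy {
   // True if this borrower may take this item right now.
   bool MayBorrow(IBorrower borrower, ILoanable item);
}

Only after definitions like these have survived some scrutiny would I start producing the schemas and API artifacts that serve them.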

This relates to my earlier statement that our products can allow our users to work at a higher level, seeing relationships and patterns that would not otherwise have been visible to them. In the same manner, if, as a designer, you produce designs that keep you from working at a higher level with the concepts you are organizing, you are keeping yourself from clearly seeing the relationships and patterns that would otherwise be visible to you. As such, you reduce your capability to take advantage of the opportunities and efficiencies that might otherwise be available to you. You further reduce your opportunity to ask important and revealing questions that you might otherwise not notice to ask about the system you are designing. Further, you might fail to see the relationships of the component or subsystem you are designing to the larger system you are developing within, or possibly, if designing the entire system, to the larger system, workflow, user scenario, et cetera into which your current design fits. Any of these failures only constricts the eventual value of the system you are designing.

Reducing your system's value in such a way can only serve to reduce your effectiveness as an architect and system developer. It seems to me that this pattern applies not only to your designs but also to your life, where we make efforts to be human well, but that's a different post, if not (and more probably) a different blog that doesn't yet exist.

This is not the end of the consideration because, as we all know, all efforts occur within a resource-constrained environment. Often this is seen as a detrimental factor of the developer's world. I disagree, though I did share that perspective at one time. The constraints of one's context are actually extremely important, so far as I can tell. They help to guide you and to form your work, making you more effective and more valuable. If you are not concerned with this, you are failing to reach your potential and to manifest it optimally, but I am again leaving the context and trappings of the limited form of machines that this blog is intended to discuss.

Getting back to concepts and conceptual design... By focusing your design on the concepts present in the domain, the problem space of what is to be solved by the system you are designing, you allow yourself to ensure that you truly, deeply understand what you are trying to solve and what its unchanging parts are. As the constants and non-constants of your problem space become defined, you will see which things must still be decided and which require no further decision, or at least no further handling. Further, working on understanding the relationships of the concepts to one another will provide you with an understanding of the sorts of policies and behaviors that you will have to consider and define in your design. Understanding and defining these policies and behaviors will allow you to see their commonality and patterns. As such, you will begin to develop an understanding of the cross-cutting concerns of your system. With these defined, you will be better able to formulate your understanding of those concerns and thereby better able to formulate your design itself.
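
As a small, hypothetical illustration of a cross-cutting concern surfacing from the concept relationships: suppose several concepts in a design turn out to share the same "notify interested parties when I change" behavior. The names below are invented for the sketch; the point is that the policy gets named once rather than re-decided inside each class.

// The cross-cutting policy, named and defined once.
public interface IChangeNotifier {
   void NotifyChanged(object source, string description);
}
// Concepts that share the policy depend on it rather than re-inventing it.
public class Order {
   private readonly IChangeNotifier _notifier;
   public Order(IChangeNotifier notifier) { _notifier = notifier; }
   public void Close() {
      // ... domain work ...
      _notifier.NotifyChanged(this, "Order closed");
   }
}
public class Shipment {
   private readonly IChangeNotifier _notifier;
   public Shipment(IChangeNotifier notifier) { _notifier = notifier; }
   public void Dispatch() {
      // ... domain work ...
      _notifier.NotifyChanged(this, "Shipment dispatched");
   }
}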

It's a funny thing that happens as I proceed through this process. I find that as I do so, the system itself, the manifestation and derivation of the concepts, clearly forms in my mind. By developing a clear understanding of the concepts and defining them such that their definitions are clear and complete, the problems that might otherwise crop up later are surfaced and can be dealt with. This in turn provides you with the opportunity to solve them proactively, which is a much more efficient time to do so, since refactoring, revision, or other forms of rework are not needed to bring the system into line with the needs of the domain. This is not to say that you will catch everything, but the amount you don't catch should be reduced if you do your job of design well; hopefully, at least, the scope of your mistakes will be reduced. Either way, the implementation you undertake will end up being clear and to the point, requiring fewer lines of code and being better organized while hopefully also being more maintainable.

Such an approach also has another important impact on your system. Having developed this rich and deep understanding of the system, when the inevitable time comes that you need to revise or expand it (which hopefully occurs, since it is an indication you have truly created value), you will be in a position to evaluate at a conceptual level exactly what the modification you are being asked to make is, exactly what effects it will have, and what changes it will require. By putting in your up-front effort, you have reduced the likelihood of bugs, rework, and the like, as well as the amount of work that will be required in the future. This is exactly the sort of value that, as engineers, we are meant to provide to the world: more effective and more efficient use of the limited resources that are available to us.

Final note. I recognize that all of these things I am saying are things the object oriented folks have been talking about for quite some time. I do not mean to steal these ideas from them, to claim exclusive ownership of them, nor to withhold my appreciation from those individuals for presenting these things in their discussions. In the end, however, it seems to me that in most of their accounts they have let these concepts remain only implicit, because they have maintained a focus on many of the concrete implementation details of their frameworks. Maybe it is simply a benefit of the concepts their work has clearly defined that I have been able to come to the intermediate conclusions that I have come to here.

Hmm... noting any kind of semblance and pattern here? ;)

It's all procedural. All the way down.

Given my last few posts, it's simple enough to recognize that I'm a proponent of the advantages that object and aspect oriented development methodologies can bring to our practice of writing software well. Often my arguments could rationally and fairly be interpreted as being anti-procedural. Actually, I think the ideals and presentation of object oriented development methods (and their subsequent clarifications and expansions) are somewhat overblown and fail to recognize one very fundamental and vital fact: it's all procedural, all the way down. I could restate this as: it's all processor operations, all the way down. I could restate it again as: it's all transistor physics in a limited Turing machine model, all the way down. It starts to get a little too abstract going that far down, no matter how valid it is to do so. The object oriented code that you write is executed as procedural code that is generally less efficient than the alternative procedural code you could be writing.

It's entirely reasonable to ask, then, why I can say this (not a particularly fresh thought) while also advocating for some of the more modern approaches to development. Despite my nerdy leanings and loyalties, it would be disingenuous for me to assert otherwise. Here is the (obviously not secret) reason: we write software, and for many of us it is a passionate and satisfying calling, but it is also a profession and a position that occurs within a larger organizational context. To be good, the software, or more generally the systems (which can span far outside of the Turing machine) we create, must create value in our larger human organization. Largely, this has occurred through automation and, by it (or many other means), greater efficiency, which can free up resources that would otherwise be spent, allowing them to be used for arguably lower priorities or for improvement of existing services that are nonetheless important to address. Other kinds of value provided by the systems we create include increased quality in the end products, through improvements in accuracy or simply by freeing up the focus of those using our systems so that they can concentrate more fully on the fundamental task they are trying to accomplish, work at a higher level, and thereby possibly encounter linked concepts they might otherwise not have been able to place in juxtaposition. Alright... enough run-on sentences about why I think the work we do is important and serves a real need within our society. To get back to the subject: the context in which we engineer is one where the actual value of our software is rarely judged by the resource allocators on the sophistication, elegance, or cleanliness with which it is designed. There are some very reasonable and unfortunate situational and organizational realities that cause this to be the case, but that too is a different post. There are also some very reasonable and fortunate situational and organizational realities that cause this to be the case, but they too are part of the aforementioned different post. The point of this post is little more than this: while we must, as our duty and obligation, tend to the technical correctness and functional provisions of our systems, the driver of that work is most appropriately the actual value created.

By this, I will have hopefully provided enough context for the following to be naturally agreeable. Our efforts naturally and rightly take place within a larger picture of value (which provides us with feedback about our actual, less subjective provision of value). Software written using objects must rely on constructs like virtual tables that keep track of the relative locations of data and methods, and aspect oriented approaches add executable instructions at all affected join points; by either method these approaches are less performant, or at the very least generate a larger number of instructions that might not otherwise be executed. Even so, the costs of not employing these methods can be quite high. As such, these approaches to development have gained support from those who must take on the task of finding acceptable compromises, valuing certain efforts over others in the very real situation of limited resources within which all of our work occurs.
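
To ground the "it's all procedural" point, here is a rough, simplified sketch (the Shape/Circle example is invented for the illustration, and real compilers and runtimes differ in the details and do this far more efficiently than hand-written delegates) of what a virtual call conceptually lowers to: plain data, plain procedures, and one extra indirection through a table of function references.

using System;

// The object oriented form: the runtime dispatches Area() through a virtual table.
public abstract class Shape {
   public abstract double Area();
}
public class Circle : Shape {
   public double Radius;
   public override double Area() { return Math.PI * Radius * Radius; }
}

// A hand-rolled approximation of what that dispatch amounts to underneath:
// the data plus a table slot holding a function reference, consulted at call time.
public struct ShapeRecord {
   public double Radius;
   public Func<ShapeRecord, double> AreaFunc;   // the "vtable" slot
}
public static class ShapeOps {
   public static double CircleArea(ShapeRecord s) { return Math.PI * s.Radius * s.Radius; }
   public static double CallArea(ShapeRecord s) { return s.AreaFunc(s); }   // the indirect call is the cost of the flexibility
}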

This value assertion of mine is not one that is much argued against in practice, and so trying to convince anyone of it would be a waste of my effort and likely a waste of your time. Instead, I would like to try to build a well reasoned case for why this is so and how exactly it is that these approaches provide such value. (Geez. Will he ever get to it? Why... Yes!)

The difference between well and poorly written procedural code is the same as the difference between well and poorly written object or aspect oriented code, or assembly. Newer approaches like object oriented programming and the newer aspect oriented programming are available strategies within a practice that I like to think of as concept oriented programming. This term has been used by many groups I could google (conceptoriented.org, www.drdobbs.com) for different things, but while they seem to have the precursors or off-shoots of what I mean, they seem to fail on the same fronts as the OOP community has. I will explain, but first I want to provide a little context. The difference, as I see it, between well and poorly written code of any style is effective and clear organization. Because all code in what I believe (maybe falsely) to be the Dijkstra computing architecture is fundamentally procedural (even if that procedural code supports a different code model), what is important in our code is that we have effectively separated out and independently factored the concerns it addresses, in a way that clearly identifies those concerns and clearly solves them in an understandable manner. What is really important about providing such clarity and division is that it can make it supremely clear exactly where a new feature, function, or modification should occur and to what extent we intend it to have effect (which should be exactly the appropriate extent and no more). The organization must formalize concepts like scope of effect and access, general practices, repeated patterns, security, and much more, so that in any one location nothing but exactly the concern being addressed (and nothing more) is represented and solved at exactly the correct scope for all instances of the concern. My notion of concept oriented programming is in this way quite different from procedural/OOP/AOP practices (maybe being a meta-practice in this conceptual context); it is a style and means of using each of those tool sets and methodologies in an effective, reusable, and transferable manner, as opposed to being a language feature set or anything of the sort. By effective, I mean "solves the concern"; by reusable, I mean "clearly delineates between itself and other concerns", such that in any case where the concern must be addressed the same code can address it; by transferable, I mean "clearly identifies itself as a solution to a specific concern (and what that concern is) and is simple to identify and understand (so it actually gets used, used correctly, and used effectively)".
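
As a toy illustration of effective, reusable, and transferable (the concern, the names, and the scenario below are invented, so take this only as a sketch): a single cross-cutting concern, "retry a transient failure a bounded number of times," identified by name, solved in exactly one place, and usable from any code that has that concern and no other.

using System;
using System.Threading;

// Transferable: the class names the concern it solves and nothing else.
public static class TransientRetry {
   // Effective: the retry policy is actually solved here, once.
   // Reusable: any operation with this concern can be passed in; no other concern leaks in.
   public static T Run<T>(Func<T> operation, int maxAttempts, TimeSpan delay) {
      int attempt = 0;
      while (true) {
         try {
            return operation();
         }
         catch (Exception) {
            attempt++;
            if (attempt >= maxAttempts) { throw; }
            Thread.Sleep(delay);
         }
      }
   }
}

A caller with the concern writes something like TransientRetry.Run(() => FetchThing(), 3, TimeSpan.FromSeconds(1)); a caller without the concern never sees it.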

Just to be a jerk: implementation is left as an exercise to you, the reader.

Sunday, May 2, 2010

She's got a sweet cooling unit under that hood; it warms things better than any heater available!

In my last post, we ended with:

using System;
using System.Collections.Generic;

public class Foo {
   public List<Bar> Bars = new List<Bar>();
   public void MagicGetBarsMethod() { /* fetches Bars from elsewhere; body omitted */ }
}
public class Bar {
   public void DoYourThing() { /* does its thing; body omitted */ }
}
public class MyApp1 {
   public static int Main(string[] args) {
      Foo foo = new Foo();
      foo.MagicGetBarsMethod();
      foreach(Bar bar in foo.Bars) {
         bar.DoYourThing();
      }
      return 0;
   }
}
public class MyApp2 {
   public static int Main(string[] args) {
      Foo foo = new Foo();
      foo.MagicGetBarsMethod();
      if(foo.Bars.Count == 0) {
         throw new Exception("There were no Bars.");
      }
      foreach(Bar bar in foo.Bars) {
         bar.DoYourThing();
      }
      return 0;
   }
}

"But wait!" the highly-engaged (let's be clear, I have yet to earn a high level of engagement) among you are saying, "Foo is unclear!", and you are right! For everyone else (sometimes I like to bully my readers through the use of literary devices), their objection is that the definition includes the method MagicGetBarsMethod, which presumably, based on its name, is responsible for getting the appropriate Bar objects for the containing Foo object. However, it fails to communicate many aspects of the behavior it contains (although they were probably obvious to the original writer in the context in which it was written). It makes obvious that it gets Bars from another place, but... Are the Bars it can retrieve always, in every case, a static collection? If not, upon what policy are changes in the source collection applied to the Foo object's collection? Is it an actively synchronized collection, such that updates are propagated as a precondition of the final commitment of membership for the item being added? Is a notification sent after the final commitment of membership that the item has been added? Does the Foo object poll its Bar source every N seconds to determine whether updates have occurred? Does the Foo object poll its Bar source only when the consumer asks for the latest version of the collection (imagine, for example, that the collection is a set of tabs, a specification of tab organization, data placed in specific tabs, and a description of how the tabs should be displayed - a web page and your browser's refresh/reload button)? Are consumers of the Bar list allowed to add new or existing Bars to the list? If so, how is duplication handled? None of this is implied by the way it presents itself to the world. This is a missed engineering opportunity... Worse, if I write it this way I am requiring everyone who uses the code to do unnecessary research and testing in order to be certain they are using my object correctly.
I am of course asserting a belief in the principles popularized (at least in my sequence of encounter) by the simple but no less elegant "The Design [previously, Psychology] of Everyday Things" (http://mitpress.mit.edu/catalog/item/default.asp?tid=5393&ttype=2), which usefully discusses things like which side of a door the handle should be on relative to the hinges. To summarize: the design of an object should make it obvious beyond a doubt (intuitive, even) exactly how it works and how it is intended to be used. If that isn't the case, the engineer has not only failed but has demanded that anyone who would or must use his or her object go through the pain and wasted time of guessing at and verifying what the engineer was thinking and what the engineer in fact did. Obviously, sometimes they can just read the code, but that is not always an option and should never be a necessity.
Obviously I failed in the case of my Bar list. There is, of course (there always is...), a valid defense against this accusation: I didn't have answers to the cluster of questions surrounding that collection at the time I first wrote it. While that is true, it still does not answer the question of why, if the retrieval of Bars is such a magical activity, the retrieval of Bars doesn't just happen magically; and it remains a failure of specification (whoever's responsibility it was to provide one). Consider the following modification:

using System.Collections.Generic;

public class Foo {
   private List<Bar> _Bars = new List<Bar>();
   private void MagicGetBarsMethod() { /* fills _Bars from elsewhere; body omitted */ }
   public List<Bar> Bars {
      get {
         // Load on first access so callers never have to remember to call the magic method.
         if(_Bars.Count == 0) { MagicGetBarsMethod(); }
         return _Bars;
      }
   }
}
public class MyApp1 {
   public static int Main(string[] args) {
      Foo foo = new Foo();
      foreach(Bar bar in foo.Bars) {
         bar.DoYourThing();
      }
      return 0;
   }
}

So what's the big deal? In the absence of information relevant to Foo's retrieval of Bars, an artifact that implies some semantics (no matter how poorly) is inappropriate. With this change, the empty implication has been removed and, further, the use of the Foo object has been simplified. I'm not claiming it has been greatly simplified, but I have been able to eliminate a line of code from both of the "toy" applications, and that means less time spent using my object, less need to understand how to use my object correctly, and fewer opportunities for bugs resulting from using my object. That one consideration, for moving that one line of code, pays for itself the first time someone uses my object and pays dividends on every use thereafter. It is an immediately profitable choice, and if you make the other choice, you are (whether you mean to or not) explicitly deciding that all future development should be more costly across most of the vectors of resources.
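As an aside, if "load on first access" really is the semantics Foo intends, the same intent can be advertised even more loudly. Here is one possible variation, purely a sketch of mine rather than a claim about how anything was actually built, using the framework's Lazy<T> so that the single-initialization policy is stated in the type rather than hidden in the getter:

using System;
using System.Collections.Generic;

public class Foo {
   // The initialization policy (load once, on first access) is visible in the type itself.
   private readonly Lazy<List<Bar>> _bars;
   public Foo() {
      _bars = new Lazy<List<Bar>>(LoadBars);
   }
   // Exposing IEnumerable also answers the "may consumers add Bars?" question: they may not.
   public IEnumerable<Bar> Bars { get { return _bars.Value; } }
   private List<Bar> LoadBars() {
      // Hypothetical retrieval; whatever MagicGetBarsMethod did would live here.
      return new List<Bar>();
   }
}

Whichever policy you choose, the point stands: make it legible on the object's surface rather than leaving it for every consumer to rediscover.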
Of course, the subtleties and realities of such an argument are often lost on the individual developer because it requires a little more up-front work from them. The subtleties and realities are likewise often lost on the leadership of a software organization because they are rightfully and appropriately focused on whether it is done and does what it is supposed to do. Regardless, thousands of these decisions are made every day in an organization, and they are made day after day after day. Over time, these decisions add up in a substantial manner and create vast differences in the productivity and effectiveness of development organizations. When it takes an hour or two to correct an off-by-one indexing error, that is the result of poor decisions, made over time, through the collaboration of all of the many interests of the organization. It is quite accurate and important to note that the sensibilities I support are not the revenue drivers of an organization, nor are they directly relevant to the users; but the long term reality is that they can have very important impacts on those very stakeholders, and making poor decisions at this level and across this aspect of our systems is an explicit decision to shorten the longevity of your product, to make your entire team less effective and productive, and to increase the negative pressures against your employment.

Tuesday, September 29, 2009

My Semantics are Leaking!

Why is it that, after having been educated in object-oriented methods; the associated advancements like aspect-oriented methods and tools; the intellectual evolutions and clarifications of practice related to the separation of concerns; and the demonstrated effectiveness of these methodologies in producing well-organized, performant, high-functioning, easy-to-use, easy-to-maintain, and easy-to-integrate-with systems, there are still people defending bug-prone, conceptually interlaced, redundant, [explicitly] procedural approaches to solving complex system engineering problems? It's not rocket science, just [basic] computer science...
This isn't about being academic and pure. As engineers, our job is to create domain models (collections of related concepts) and to integrate disparate domain models. That just means defining concepts and how they relate to one another. It's a lot like creating the parts of a car engine. While the engine is physical and our models are conceptual (bear with me...), both are composed of assembled parts that interact with one another in well defined ways to effect changes in one another across multiple levels of assigned purpose. Because of the physical nature of the pieces in an engine, no one is troubled by the problem of confusing which components belong in the engine, the exhaust system, the audio system, the steering assembly, etc... They are not tempted to co-locate unrelated groupings of subsystems and components. They have no problem organizing those components in an orderly fashion. On the other hand, people are constantly interlacing the concepts (which are much like the parts of our systems) of their models; they don't concern themselves with ensuring the concepts are organized in an orderly and clearly understandable manner. Let me give a simple example that proves my point and that I shall beat like a dead horse (<sarcasm>all the while feeling justified, because, hey, my example proves I'm right. Right?</sarcasm>):

using System.Collections.Generic;

public class Foo {
   public List<Bar> Bars = null;
   public void MagicGetBarsMethod() { /* fetches Bars from elsewhere; body omitted */ }
}
public class Bar {
   public void DoYourThing() { /* does its thing; body omitted */ }
}
public class MyApp1 {
   public static int Main(string[] args) {
      Foo foo = new Foo();
      foo.MagicGetBarsMethod();
      if(foo.Bars != null) {
         foreach(Bar bar in foo.Bars) {
            bar.DoYourThing();
         }
      }
      return 0;
   }
}

Please ignore the lack of encapsulation, the poor naming, the lack of synchronization, etc. That wouldn't be good in a real system, but as written the stripped-down example is sufficient for the discussion. It is sufficient because there's a conceptual leak here: the conditional evaluating foo's Bars property for null. This is an example of the leakage of the Foo concept into the domain of the "application". It's not yet a severe leakage, but it is a clear distribution of the Foo object's semantics into the general context. The consequence is that every piece of code which uses the Bars property now needs to contain that conditional, and every instance of the use of that property has to be kept in sync. The programmer has been lazy and selfish: the failure to fully encapsulate the semantics of the Foo object has resulted in N units of extra work (N being the number of times the Foo type is used) and less readable code to boot. So there's a complaint from a work and resource point of view, but what happens after the following is written?

using System;

public class MyApp2 {
   public static int Main(string[] args) {
      Foo foo = new Foo();
      foo.MagicGetBarsMethod();
      if(foo.Bars != null) {
         foreach(Bar bar in foo.Bars) {
            bar.DoYourThing();
         }
      }
      else {
         throw new Exception("Bars was bad.");
      }
      return 0;
   }
}

Yeah, sure, I'm dramatizing... but then again I've seen this sort of thing in much heftier situations. What has just entered the conceptual landscape, by the action of the external parties, is a divergence in the semantics of Foo. In the one case, there's an encoded belief that Bars being null is a normal occurrence to be silently ignored. In the other, there's an encoded belief that Bars being null is an exception to the norm. This conceptual divergence is problematic in isolation but much more destructive in the large. Combinatorially, these small divergences create a great number of semantic combinations... you remember combinatorics... You're unlikely to find the full factorial consequences in a real world system, but even the early stages are bad enough. Is there malice involved in these sorts of occurrences? Probably not; more often than not, it is the development of what had been a simple concept into a more complex or nuanced concept that causes such changes to be introduced. The problem is not that the concept needed to be enriched, but that the enrichment was performed outside of the concept that was expanding.
Being extreme? Yeah, that's me... There's an easy and excellent counter argument here: it's not the semantics of Foo that leaked, and in fact there was no leakage at all. To the contrary, what we're seeing are the semantics of the two applications and how they regard and interact with the Foo concept. To state it in other terms, the conceptual expansion didn't occur in the Foo concept, but only within the App2 concept. For App1, seeing a null Bars isn't significant and simply means there's nothing to do; for the second, a null Bars is an exception to the expectation that Bars always has something to give. That expectation is one of the preconditions of the second app, and it is the responsibility asserted by the application's consumer (that is, to only execute Main when there are one or more Bars to be had). That would be appropriate. Still, though, there's no reason for Bars to ever be null; if it's an exceptional situation for app 2 to get hold of a Foo that produces an empty collection of Bars, it can check the count of items in the collection and throw if that condition is violated. This allows for cleaner code and better separation of the concerns and responsibilities of the items involved:

using System;
using System.Collections.Generic;

public class Foo {
   public List<Bar> Bars = new List<Bar>();
   public void MagicGetBarsMethod() { /* fetches Bars from elsewhere; body omitted */ }
}
public class Bar {
   public void DoYourThing() { /* does its thing; body omitted */ }
}
public class MyApp1 {
   public static int Main(string[] args) {
      Foo foo = new Foo();
      foo.MagicGetBarsMethod();
      foreach(Bar bar in foo.Bars) {
         bar.DoYourThing();
      }
      return 0;
   }
}
public class MyApp2 {
   public static int Main(string[] args) {
      Foo foo = new Foo();
      foo.MagicGetBarsMethod();
      if(foo.Bars.Count == 0) {
         throw new Exception("There were no Bars.");
      }
      foreach(Bar bar in foo.Bars) {
         bar.DoYourThing();
      }
      return 0;
   }
}
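
One last hypothetical, sketched only to echo the earlier point about enrichment happening inside the concept that is expanding: if the "there must be at least one Bar" expectation later turns out to belong to Foo itself rather than to App2, it can be stated once, inside Foo (the method name below is invented for the sketch):

using System;
using System.Collections.Generic;

public class Foo {
   public List<Bar> Bars = new List<Bar>();
   public void MagicGetBarsMethod() { /* fetches Bars from elsewhere; body omitted */ }
   // The "empty is exceptional" policy lives in exactly one place.
   public List<Bar> RequireBars() {
      MagicGetBarsMethod();
      if(Bars.Count == 0) {
         throw new InvalidOperationException("Foo produced no Bars.");
      }
      return Bars;
   }
}

App2's body would then collapse to a foreach over foo.RequireBars(), and the policy would have exactly one home.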

My next post will discuss my miserable failure to engineer the Foo object with quality.

Friday, September 25, 2009

Intro

I complain about poor engineering. We are, each of us, kind of weak and prone to failure; I don't exclude myself in any way. However, regarding engineering, in person I've always given bits and pieces, trying to communicate my vision, support constructive behaviors, and always work towards improvement and growth. While I recognize that a perfect state does not exist, I do strive for an attainable state that infinitely approaches perfection. To approach it, we must have a notion, if not a concept, of what perfection is, however vague or incomplete. That is sufficient, as long as it is also set on the trajectory of an infinite approach toward perfection. Unless that notion is passed on and shared, it seems this will occur less effectively. This blog will not do so perfectly, and I am certain to be an insufficient steward, but what the hell... maybe I'll save some breath.