
1 - Dijkstra, A discipline of programming, 1976
“To my taste the main characteristic of intelligent thinking is that one is willing and able to study in depth an aspect of one's subject matter in isolation, for the sake of its own consistency, all the time knowing that one is occupying oneself with only one of the aspects. ... these next nine slides (repeated 11 times) aren’t part of the talk per se; they are part of the preface to it. in the time before the talk starts, the slides are shown on the screen, going from one to the next in a loop. the purpose is to set the theme for people, of separation of concerns, of what that is about, etc. I want to make clear that it is an old idea, that it is not specific to the world of software, that there are different ways to do it… - Dijkstra, A discipline of programming, 1976 last chapter, In retrospect

2 … The other aspects have to wait their turn, because our heads are so small that we cannot deal with them simultaneously without getting confused. This is what I mean by ‘focussing one's attention upon a certain aspect’; it does not mean completely ignoring the other ones, but temporarily forgetting them to the extent that they are irrelevant for the current topic. ... - Dijkstra, A discipline of programming, 1976 last chapter, In retrospect

3 … Such separation, even if not perfectly possible, is yet the only available technique for effective ordering of one's thoughts that I know of. ... - Dijkstra, A discipline of programming, 1976 last chapter, In retrospect

4 … I usually refer to it as ‘separation of concerns’, because one tries to deal with the difficulties, the obligations, the desires, and the constraints one by one. ... - Dijkstra, A discipline of programming, 1976 last chapter, In retrospect

5 … When this can be achieved successfully, we have more or less partitioned the reasoning that had to be done — and this partitioning may find its reflection in the resulting partitioning of the program into ‘modules’ — but I would like to point out that this partitioning of the reasoning to be done is only the result, and not the purpose. ... - Dijkstra, A discipline of programming, 1976 last chapter, In retrospect

6 … The purpose of thinking is to reduce the detailed reasoning needed to a doable amount, and a separation of concerns is the way we hope to achieve this reduction. ... - Dijkstra, A discipline of programming, 1976 last chapter, In retrospect

7 … The crucial choice is, of course, what aspects to study ‘in isolation’, how to disentangle the original amorphous knot of obligations, constraints and goals into a set of ‘concerns’ that admit a reasonably effective separation. ... - Dijkstra, A discipline of programming, 1976 last chapter, In retrospect

8 … To arrive at a successful separation of concerns for a new, difficult problem area will nearly always take a long time of hard work; it seems unrealistic to expect otherwise. ... - Dijkstra, A discipline of programming, 1976 last chapter, In retrospect

9 … The knowledge of the goal of ‘separation of concerns’ is a useful one: we are at least beginning to understand what we are aiming at.” - Dijkstra, A discipline of programming, 1976 last chapter, In retrospect

10 slides, papers and system at www.parc.xerox.com/aop
goal of this talk: discuss the implementation of complex software systems, focusing on issues of modularity; how existing tools help achieve it; propose a new tool to help improve modularity in some cases where existing tools are inadequate What I want to talk about today is modularity in complex software systems. I’m going to be talking about the conceptual and implementational tools we use in building those systems, and how those tools help us to achieve the “good modularity” that we know is so important. I believe our current tools have been very successful. We can today build a number of extremely impressive systems far more easily than would have been possible even ten years ago. But I also believe that we should be able to do even better. Much better. Many systems are still far harder to build than I think they should be. There are a number of problems where even if we use the best existing technology, the complexity of the system is difficult to manage, it is difficult to get good modularity in it, and as a result the system is too difficult to develop and maintain. And so what I’m going to do in this talk is to try and analyze some of those problems, and to propose a new tool [conceptual and technical] that I hope can be helpful. I hope that this tool can make a number of systems easier to build. Before starting, I want to note that the material in this talk builds on bits and pieces of a lot of prior work. We believe that our group has developed the first spanning framework for thinking about building systems this way, but it’s clear that there are many systems out there that have some of these properties. In fact, I hope that as you listen to this talk you will recognize some of the problems and solutions you’ve worked with in the past and, in that way, that this talk will help you to work with those ideas in the future. ——— old So this is a talk about a lot of things -- se, pl, slides, papers and system at

11 part i -- sharing context
format of this talk: sharing context; a problem and an idea; our current instantiation of the idea; implementation; summary and hopes the talk is organized into 5 parts - first I want to lay out some of my basic assumptions about software engineering and the role that SE research has to play in helping practicing software engineers - then I will use an example to show first the strengths and then the weaknesses of the current state of the art. I’ll use that same example to propose our new idea - then I will present our current instantiation of this new idea. This is an extension to Java™ that is available for public use. - talk a little bit about implementation issues - summarize, and tell you where I hope AOP will be going in not just the next 5-10 years but also beyond that part i -- sharing context

12 I sharing context part i -- sharing context
So now I want to kind of go back to first principles and lay out some of my assumptions about the basic challenges of software engineering. Forgive me if these next few slides seem a bit basic—I know that they are. But experience has shown that it helps to present our new idea if we first establish a shared way of talking about the basic principles. ——— notes part 1 has to say: separation of concerns is the strategy doing a good job of that when soc is manifest in design and implementation issues are well localized handling of them is explicit introduce notion of component… the “component concept” has to show what it means for “component” to be a concept that carries all the way through the design and implementation (to lay the foundation for “aspect” doing the same) component, component implementation, aspect, aspect implementation part i -- sharing context

13 the engineering challenge
extremely complex systems more than our mind can handle all at once must manage the complexity as I see it, when we build software systems we are engaged in an engineering discipline; and, as I see it, the primary challenge of all engineering disciplines is the same: - we are working with extremely complex systems, and the nature of our brains is such that we can’t think about all that complexity at once. - so we must find some way of managing that complexity so that we can have a chance at actually working on the problem part i -- sharing context

14 problem decomposition
break the problem into sub-problems address those relatively independently All engineering disciplines, including computing, have adopted the same basic strategy for doing so. They find a way to break complex problems down into smaller sub-problems, and then break those down into smaller sub-sub-problems, and so on, until the problems reach a manageable size. The idea then is that these smallest problems can be worked on in relative isolation. ——— old stuff below here has to talk about decomposition, and sub-decomposition part i -- sharing context

15 solution construction & composition
construct complete systems from the designs by implementing the sub-parts, and composing them to get the whole we then construct solutions to the smallest sub-problem—sub-systems—and then compose those together with other sub-systems to build up to the complete system we want ——— other notes below here and some notion of “interface” this next part has to be very clear about setting the way in which the language composition mechanisms “fit” the design decomposition — this lays the foundation for the point about how cc with one composition mechanism leads to tangling at the heart of the talk part i -- sharing context

16 design & implementation
decomposition breaks big problems into smaller ones composition builds big solutions out of smaller ones so in design and implementation we have decomposition and composition working together in a synergistic relationship like this: - design breaks big problems down into smaller ones - implementation builds little systems and composes them together into bigger systems So that’s a nice story, we’ll break big problems into little ones, solve them, and glue all the solutions together. But all by itself it isn’t enough. In order for it to work, we have to find a way to break the big problem into the “right pieces”. That breakdown into the right pieces is what we are talking about when we say things like “good modularity”, or “clean separation of concerns”. ——— other notes below here in our discipline programming is how we build the subsystems. we’ve got various programming mechanisms that allow us to implement the subsystems and then compose them together to build back up to the large system we are interested in the process of design and implementation works through a mutual synergy between problem decomposition and solution construction&composition. the design and implementation go well when the decomposition and composition fit each other part i -- sharing context

17 “clean separation of concerns”
we want: natural decomposition concerns to be localized explicit handling of design decisions in both design and implementation On this slide I’ve tried to capture some of the most important things I think we mean when we say good modularity or clean separation of concerns. [GO SLOWLY HERE] - natural, in that we want the decomposition to fall along the lines we would like to think in terms of. We want the pieces to be the pieces we would … - localization to be clean, in that we want individual design concerns to be well contained into one of the units of the breakdown, as opposed to being spread among several of the units. - design decisions to be handled explicitly, in that we want all the important design decisions we make to be addressed explicitly in some module rather than being an implicit property of the overall structure - and we want all of the above properties to hold true in both the design and the actual source code This is a lot to ask. To work on an extremely complex system and to ask that the solution have all these properties is a tremendous amount to ask. But ultimately, I believe this is what practicing SEs need, and this is what we as software researchers need to help them achieve. (Put another way, the closer practicing SEs can come to this picture, the more complex a system they can build.) So remember this picture. This is what I’m taking the goal to be. [For all software engineering research!] At the end of the talk the question will be whether the ideas I’ve presented can contribute to achieving this. part i -- sharing context

18 achieving this requires...
synergy among problem structure and design concepts and language mechanisms “natural design” “the program looks like the design” So what does it take to do this? It seems to me that one of the keys to achieving this is to have two simultaneous synergies: - a first synergy between the concepts we use to decompose the problem and the nature of the problem itself (this leads to a natural decomposition) - a second synergy between the design concepts and the implementation mechanisms used (this leads to the code looking like the design) design breakdown aligning with the natural concerns of the system - requires that the programming mechanisms we use to implement the design fit those design concepts in a natural way when we have the first, we can get “natural designs” (the design feels good); when we have the second, we can get “the program looks like the design” when we have both we get the clean separation of concerns all the way through the design and the implementation -- we get the goal I set out before [PAUSE] ——— old stuff below here [for DSL: say where the design fits the problem and the code fits the design --- the code fits the domain, that’s the ideal of DSL, or DSL are explicitly about the 2nd synergy and implicitly about the first one] each independent concern is addressed by an independent program component. and the composition of the program units matches the decomposition in the design part i -- sharing context

19 the “component”1 concept
a modular unit of functionality; fits many natural design concerns; well-supported by existing programming technology: a rich collection of design principles, conventions and notations, programming mechanisms now I think we’ve had this nice double synergy for some time, based on our concept of “component” — in this talk, when I say “component” what I mean is a modular unit of functionality (something that could be implemented as a procedure, a class, an RPC interface etc.) [TALK THROUGH EXAMPLES EXPLICITLY] the component concept fits many of our natural design concerns very well, so for example in a banking application, we can have a component that computes compound interest, a component that prints bills, a component that…; and in an operating system we can have a component that manages the windows, and a component that manages the files, and a component that manages the memory… and in fact we have developed a rich design practice of breaking design problems down into components. there’s a lot of published work in the SE community and elsewhere that talks about that. we also have programming mechanisms that cleanly fit the component concept --- interfaces, APIs, RPCs and the languages that sit behind them… part i -- sharing context
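To make “component” concrete, here is a minimal, hypothetical Java sketch of the compound-interest component from the banking example above; the interface and class names are illustrative only, not taken from any real system.

    // a component: a modular unit of functionality behind a small interface
    interface InterestCalculator {
        // accumulated amount for principal P at annual rate r,
        // compounded n times per year for t years: P * (1 + r/n)^(n*t)
        double compound(double principal, double rate, int periodsPerYear, int years);
    }

    class CompoundInterestCalculator implements InterestCalculator {
        public double compound(double principal, double rate, int periodsPerYear, int years) {
            return principal * Math.pow(1.0 + rate / periodsPerYear, (double) periodsPerYear * years);
        }
    }

The bill-printing, window-management and file-management components mentioned above would each sit behind their own interface in the same way.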

20 object-orientation “objects” used in design and implementation
object-oriented design object-oriented programming many tools to bridge the gap these days objects are an important technology. and I want to be clear that they fit into the notion of component as I’m using it here today. objects are a modular unit of functionality. associated with object technology we have more specialized design methods for breaking systems down into objects, and specialized implementation technology (OOP) for implementing the designs in terms of those objects I think this dual synergy picture is a nice way of thinking about what’s going on with object technology. the object concept is a better fit to some problems than the more general component concept. and, OOP is a better fit to an object-based design than more general procedural programming. So, what OO does, in the cases where it works well, is to tighten those dual synergies, to get a better version of the goal picture. That’s why it has been so successful! (again when it works well) ——— [For DSL: again, DSL is all about tightening that even further.] objects objects part i -- sharing context

21 summary so far good separation of concerns in both design and implementation complex systems design practices support decomposition programming languages support implementation and composition of components decomposition into components So here’s a summary of what I said so far (use the animation!) - as engineers, we are dealing with systems that are more complex than our minds can handle - we want to break them down into sub-problems we can focus on in relative isolation -- clean separation of concerns (the four properties from before: natural problem decomposition, clean localization, explicit handling and capture of design decisions, and the code looks like the design) - to be able to achieve that we need dual synergies, between the design concepts we use to decompose the problem and the problem itself; and between those design concepts and the implementation mechanisms we use - we’ve had that dual synergy for some time around the concept of component… it fits many of our design concerns, we have implementation technology that fits it [PAUSE] Now what I think comes out of this picture is that good modularity comes from having the “right thing” here (inside the red oval). So what I’m going to do in the rest of the talk is explore whether the concept of “modular unit of functionality”, and the implementation technology that supports it (procedure, object etc.), is really the best and only thing to put into that space. You are going to see me propose some ideas that will appear to violate modularity. They will certainly violate some of our existing rules of thumb with regard to modularity. But all I’m really trying to do is push on the part of this picture inside the red ovals. I like modularity. I’m trying to make it better. All I’m asking is whether we might be able to make it better by making some significant changes to what we use as fundamental building blocks. programming languages support implementation and composition part i -- sharing context

22 II a problem and an idea part ii -- a problem and an idea
What I want to do now is use a simplified example of a real system to show: - first the strengths of the current state of the art, - then some weaknesses, - analyze those weaknesses, - and propose our new tool Despite the fact that this example is simplified, I hope you will be able to see the real issues in it, and relate it to problems you have had in large systems. part ii -- a problem and an idea

23 a distributed digital library
The example is a digital library. A system that holds collections of “digital books”, allows users to access them, search them, print them, etc. So, for example, … [walk through the THREE animations carefully] * might send a message to a library looking for a book, and get the book back * might send … library might not have the book, so it would delegate to another library, which does have it, and get the book back * once user has the book, might examine its title and author, and then decide to print it out part ii -- a problem and an idea

24 the component structure
use objects objects are a natural fit for this system so… the design breaks down into component objects implement using OOP so in the design for this system I’ve chosen to use objects for the component model that means that during design I break the system down into objects which the implementation then implements ——— old stuff below here have to have said component by here! part ii -- a problem and an idea

25 the class graph
[class diagram: Book (title: string, author: string, isbn: int, pdf: pdf, tiff: tiff); Library holds * Book; Library cooperates with Library; User accesses Library; User uses Printer]
[TALK THROUGH THIS SLOWLY] so here’s a simple design (that I’ve sketched out using something like UML). It has four kinds of objects, ~ ~ ~ ~. * a library can hold any number of books * libraries can cooperate with each other * users access libraries and printers * printers pay for the plane ticket The details of this design aren’t critical. What I want to stress about it is that it has nice separation of concerns: * what a book does is localized in one place, same for user, library and printer * the breakdown is natural -- it fits the natural separation we want to think about the system’s functionality in terms of and the code is nice too... part ii -- a problem and an idea

26 the code Book User Library Printer part ii -- a problem and an idea
import java.util.*;

class Book {
  private String title;
  private String author;
  private String isbn;
  private PostScript ps;
  private User borrower;
  public Book(String t, String a, String i, PostScript p) {
    title = t; author = a; isbn = i; ps = p;
  }
  public User get_borrower() { return borrower; }
  public void set_borrower(User u) { borrower = u; }
  public PostScript get_ps() { return ps; }
}

class User {
  private String name;
  Library theLibrary;
  Printer thePrinter;
  public User(String n) { name = n; }
  public String get_name() { return name; }
  public boolean getBook(String title) {
    Book aBook = theLibrary.getBook(this, title);
    thePrinter.print(this, aBook);
    return true;
  }
}

class Library {
  Hashtable books;
  Library() { books = new Hashtable(100); }
  public Book getBook(User u, String title) {
    System.out.println("REQUEST TO GET BOOK " + title);
    if (books.containsKey(title)) {
      Book b = (Book) books.get(title);
      System.out.println("getBook: Found it:" + b);
      if (b != null) {
        if (b.get_borrower() == null) {
          b.set_borrower(u);
          return b;
        }
      }
    }
    return null;
  }
}

public class Printer {
  String status = "Idle";
  Vector jobs;
  public boolean print(User u, Book b) {
    PostScript ps = b.get_ps();
    Job newJob = new Job(ps, u.get_name());
    return queue(newJob);
  }
  boolean queue(Job j) {
    //...
    return true;
  }
}

I know that no one can see this. But just bear with me for a second. (You’ll see in a while why I wanted to show it to you.) For now, I just want you to trust me about the fact that it: - also has that good separation - aligns with the design - so issues about the book are all here, … so… part ii -- a problem and an idea

27 all is well design is natural code looks like the design
good separation of concerns localized in the design localized in the code handled explicitly all is well. breaking the system down into component objects has produced a natural design, with good structure. the program structure is also good, and in fact “the program looks like the design” this has allowed us to achieve good separation of concerns, in both the design and the code. the localization is good, and what is going on is quite explicit in the code. That in turn allows us to work with the system effectively. The reason this works well is because the concept and mechanisms of component fit this problem -- they provide the dual synergy I mentioned: the design concept we used is a natural fit to the different concerns we want to separate; and the programming technology we used is a natural fit to that design concept. This is the goal I laid out before being very well achieved -- this is the component concept and its underlying implementation mechanisms at their best. part ii -- a problem and an idea

28 a distributed digital library
but now let’s go back and deal with a small concern having to do with network efficiency. remember, this is a distributed system, so network efficiency matters. One thing that we want to be careful about is that the three server machines could be located at work, connected by a fast network, but the user might be at home, connected by a slow network. We want to minimize the traffic going across the home link. ——————————— meta-notes below here In this section, we want to show a case where (some combination of): - the design concerns aren’t clearly separated in the design - the design concerns aren’t clearly separated in the code - the code doesn’t look like the design - the code is a tangled mess [the point being we need to show that the key points of the prior story aren’t working, alluding back to the original goals slide] part v -- conclusions

29 minimizing network load
dataflow patterns Computer A Computer D Computer B Computer C library library printer book to think about this problem it is best to think about what happens when the system is running. When we watch the system run we see some interesting dataflow patterns emerge. [WORK WITH THE ANIMATION] Some books start at a library, flow to another library, flow to the user and stop. Other books flow from library, to library to user and on to the printer. user part ii -- a problem and an idea

30 minimizing network load
controlling slot copying library library printer method invocations book printable format title, author, isbn WORK WITH THE ANIMATION - but remember about the internal implementation of books: inside books there are some very small data members, like the title and the author and the isbn number; and some very large data members, like the scanned-in images of all the pages, or the PDF file or something - now if this user is working from home, it’s fine for the small items to go home and back, but we’d like the very large data members to take a shortcut like this so what we’d like to have happen is easy enough to say -- there are these patterns of book flow that come up in the computation, and when they do, we’d like the slot copying to be implemented this special way. easy to say, how easy is it to implement? ——————————— if we explicitly say “copy some fields lazily some of the time” it helps to set up the “why don’t you just use a lazy language” question. that’s a good question, because it goes to the explicit criteria of good soc. user part ii -- a problem and an idea

31 the code revised Book User Library Printer
import java.util.*;
import java.rmi.*;
import java.rmi.server.UnicastRemoteObject;

class Book {
  private BookID id;
  private PostScript ps;
  private UserID borrower;
  public Book(String t, String a, String i, PostScript p) {
    id = new BookID(t, a, i); ps = p;
  }
  public UserID get_borrower() { return borrower; }
  public void set_borrower(UserID u) { borrower = u; }
  public PostScript get_ps() { return ps; }
  public BookID get_bid() { return id; }
}

class BookID {
  private String title;
  private String author;
  private String isbn;
  public BookID(String t, String a, String i) {
    title = t; author = a; isbn = i;
  }
  public String get_title() { return title; }
}

class User {
  private UserID id;
  Library theLibrary;
  Printer thePrinter;
  public User(String n) { id = new UserID(n); }
  public boolean getBook(String title) {
    BookID aBook = null;
    try { aBook = theLibrary.getBook(id, title); } catch (RemoteException e) {}
    try { thePrinter.print(id, aBook); } catch (RemoteException e) {}
    return true;
  }
  public UserID get_uid() { return id; }
}

class UserID {
  private String name;
  public UserID(String n) { name = n; }
  public String get_name() { return name; }
}

interface LibraryInterface extends Remote {
  public BookID getBook(UserID u, String title) throws RemoteException;
  public PostScript getBookPS(BookID bid) throws RemoteException;
}

class Library extends UnicastRemoteObject implements LibraryInterface {
  Hashtable books;
  Library() throws RemoteException { books = new Hashtable(100); }
  public BookID getBook(UserID u, String title) throws RemoteException {
    System.out.println("REQUEST TO GET BOOK " + title);
    if (books.containsKey(title)) {
      Book b = (Book) books.get(title);
      System.out.println("getBook: Found it:" + b);
      if (b != null) {
        if (b.get_borrower() == null) {
          b.set_borrower(u);
          return b.get_bid();
        }
      }
    }
    return null;
  }
  public PostScript getBookPS(BookID bid) throws RemoteException {
    if (books.containsKey(bid.get_title())) {
      Book b = (Book) books.get(bid.get_title());
      if (b != null) return b.get_ps();
    }
    return null;
  }
}

interface PrinterInterface extends Remote {
  public boolean print(UserID u, BookID b) throws RemoteException;
}

class Printer extends UnicastRemoteObject implements PrinterInterface {
  String status = "Idle";
  Vector jobs;
  private Library theLibrary;
  public Printer() throws RemoteException {}
  public boolean print(UserID u, BookID b) throws RemoteException {
    PostScript ps = null;
    try { ps = theLibrary.getBookPS(b); } catch (RemoteException e) {}
    Job newJob = new Job(ps, u.get_name());
    return queue(newJob);
  }
  boolean queue(Job j) {
    //...
    return true;
  }
}

well here’s the revised code… now you can see why I wanted to show you the code before… I’ll go into more detail in a minute, but I think this high-level view actually gives us an important perspective on the problem. what is evident is that the separation of concerns has broken down. we would have liked this concern to be isolated in a single new module or one of the old modules. but clearly it isn’t. we had to change code all over the place. part ii -- a problem and an idea

32 why? why did so much code change?
why wasn’t this concern well localized? why didn’t this “fit” the component structure? so the question is why? - part ii -- a problem and an idea

33 because… we are working with “emergent entities”, and
the component concept, and its associated implementation mechanisms, fundamentally don’t provide adequate support for working with emergent entities I want to suggest that what is making this complicated, the reason why it didn’t “fit the component structure” is that we are working with emergent entities -- I’ll say what I mean by that term in a moment -- and that the component concept and its implementation mechanisms fundamentally don’t provide adequate support for working with emergent entities. So that’s a very strong claim, let me try to back it up. part ii -- a problem and an idea

34 emergent entities printer library user book
what do I mean when I say ee? well this green dataflow pattern right here (point at it) is an emergent entity. so is this purple one. (there are other kinds which I will talk about later) ——— notes below here terminology check, be sure to say: interactions that give rise to ee, and aspects that control ee NOT implementation of ee emergent entities don’t exist explicitly in the code. there is no component, or module, or single line of code that we can point to and say “there, this is the place that implements this dataflow pattern” Instead, the runtime interactions of the entire system cause the entity to emerge, in an identifiable but not necessarily reified way during execution part ii -- a problem and an idea

35 emergent entities emerge1 during program execution are not components
printer library user book emerge1 during program execution from (possibly non-local) interactions of the components are not components do not exist explicitly in the component model or code 1 emerge: to become manifest; to rise from or as if from an enveloping fluid; come out into view - so is this path that we are willing to have the smaller data members follow - and this path that we want larger ones to follow emergent entities are entities that arise in the actual computation -- the execution of the system -- but that don’t exist in a clear, crisp way in the component model or source code ——— notes We don’t mean the Santa Fe Institute sense of emergent. We mean a simpler thing, that could just be called dynamic. part ii -- a problem and an idea

36 emergent entities emerge1 during program execution are not components
printer library user book emerge1 during program execution from (possibly non-local) interactions of the components are not components do not exist explicitly in the component model or code 1 emerge: to become manifest; to rise from or as if from an enveloping fluid; come out into view so this purple book flow path is an ee, because while it clearly comes up in the computation, there is no single component in the cm that is responsible for it ——— old stuff below here part ii -- a problem and an idea

37 are tough to handle because...
printer library user book why do ees tangle the code so badly? - for one thing, it is hard to deal with something that isn’t explicit - for another, the fact that they arise out of non-localized origins and interactions means that we need to get our fingers in lots of places in the code to “get at” them. that’s just what happened in the revised code -- we had to go to a bunch of different places to kind of gather up the “roots and contact points” of the emergent entity; to make it explicit enough that we could force it to work this other way all this is exacerbated because the emergent entities tend to do something I want to call cross-cutting of the component modularity segue: the easiest way to see what I mean by this is to kind of pick up these colored structures and lay them down on the component model... they are not explicit in the component model or code they have non-localized origins and interactions they cross-cut the component structure... part ii -- a problem and an idea

38 cross-cutting the components
title: string author: string isbn: int pdf: pdf tiff: tiff Library Book holds * cooperates User accesses Printer uses segue: the easiest way to see what I mean by this is to kind of pick up these colored structures and lay them down on the component model... notice that: - the two slices of the book class are just that, slices through the middle of the class; they don’t correspond to any particular superclass or subclass or anything - the paths don’t correspond to particular existing methods this is what I mean by “cross-cutting” -- these shapes cut across the modularity of the component model BUT, notice that while these ee’s cross-cut the component model, it isn’t that they don’t have any clear modularity. I would argue that these colored shapes look pretty good, and feel pretty clear, and make sense. In fact, I would argue that this combined picture looks pretty good and makes sense.[in fact this whole picture feels good and makes sense. This is very important. This fact that modularities can cross-cut but still be coherent, and furthermore that it is possible to draw a clear picture of both is very important. It is the basis for what we are going to try to do. the sub-parts of the objects are not existing classes the desired dataflows are not existing message sends part ii -- a problem and an idea

39 violates separation of concerns leads to tangled code
but, but, but... violates separation of concerns leads to tangled code the code can be remodularized to “fit” better... <imagine your own alternative class diagram here> Now you may be thinking — this is a lot of talk about a simple problem; that you could fix this very easily by reworking the classes and interfaces a little bit. you could split book into handle and printables and explicitly have printer go back and get it. be honest -- how many of you have been asking yourself why doesn’t he just restructure it a bit? Sure, it’s easy enough to restructure this, and as practitioners we have learned to do that and to do it reasonably well. But I no longer believe that kind of restructuring is good enough. Why? Well what’s wrong with the restructuring is that it messes up the separation of concerns. If you have two classes, one called a book handle and one called a book printable, then what you have is the clean functionality split into two places and the distribution concern mixed into both. It is now harder to change just one or the other. Consider what happens if someone says take that optimization back out of the code! The black and red code I showed was this remodularization after all! It’s also important to stress that while the confounding of concerns in this small case might be manageable (i.e. the black and red code might be manageable) as the system gets bigger it gets harder and harder to cope with. ——— notes lazy isn’t good enough. it’s not explicit enough, it probably doesn’t work since it might still suck the bits home and back, and any restructuring to support it breaks soc. part ii -- a problem and an idea

40 claim remodularizing isn’t good enough!
it ruins the separation of concerns the functionality and the network optimization concern are fundamentally different would like different “carvings”1 of the system in terms of component structure, and in terms of emergent entities, with support for the cross-cutting modularities So, here’s my claim about this system: - remodularizing the component model to deal with these emergent entities isn’t good enough - we need to do something better - we want to be able to work with different “carvings”, one carving about the components and their functionality and one carving about the emergent entities and controlling them - we want to be able to design and program directly in terms of these carvings [PAUSE] well, what if we just did that. let’s jump now to two pieces of imaginary code... 1 carve: to cut with care or precision, to cut into pieces or slices, to work as a sculptor or engraver part ii -- a problem and an idea

41 just try it dataflow Book {Library to User}
{copy: title, author, isbn}; dataflow Book {Library to Printer} {direct: pdf, tiff}; these two pieces of code are what I’m going to call aspects (definition later) what these two pieces of code do is that they attach to the ORIGINAL clean program, and control the ees the way we want to. So, without editing the original component code, these two programs achieve the same effect. these exact aspects are a bit idealized to make presenting the idea easier. a little later I’ll show you ones written in a slightly different language that actually do work let me tell you how to read them, and then I can come back and say some more general things about them ——— old stuff below here part ii -- a problem and an idea

42 what it says dataflow Book {Library to User}
{copy: title, author, isbn}; library library printer the dataflow of books, from library objects to user objects, should be implemented by copying the title, author and isbn slots only book what this piece of code (this aspect) says is that this ee, should have this slot copying. how does it say that? user part ii -- a problem and an idea

43 how it says it dataflow Book {Library to User}
identifies emergent entity dataflow Book {Library to User} {copy: title, author, isbn}; controls its implementation library library printer book well the first line identifies the ee -- think of it as reaching out into the computation and grabbing hold of the ee -- it says “when it happens that a book flows …” and then the second line uses this special copy directive to say that “copy the contents of these three slots as it goes along” ——— old stuff below here for later… theory of emergence user part ii -- a problem and an idea

44 cross-cutting dataflow Book {Library to User}
emergent dataflow Book {Library to User} {copy: title, author, isbn}; static Book Library title: string author: string isbn: int pdf: pdf postscript: ps I said that helping with the ways in which the two carvings cross-cut each other was a key thing we would need help with. Let’s look at how that is supported here. In the first line we’ve borrowed an idea from relational database query languages and more recently the Demeter system to make it possible to speak cleanly in terms of the cross-cutting modularity. We just reference the dataflow that we are after, independent of how it arises. In the second line we just enumerate the slots (they could be any subset of the slots including some defined by superclasses). (we could use more sophisticated means, like saying ALLs (size > 10), or their types or something) User Printer part ii -- a problem and an idea

45 and... dataflow Book {Library to Printer} {direct: pdf, tiff}; printer
user book having read that first aspect, you can now read this one pretty easily reaches out and identifies... it uses a different slot transfer directive... part ii -- a problem and an idea

46 assume a “weaver”
a “language processor” that accepts two kinds of code as input; produces “woven” output code, or directly implements the computation
[the slide shows the 4 classes and the 2 portal aspects feeding into the aspect weaver, which produces the woven (RMI-style) code -- 4 classes 2 aspects printer library user book; the two aspects shown are:]
portal Library {
  Book find (String title){
    return: Book: {copy title, author, isbn;}
  }
}
portal Printer {
  void print(Book book) {
    book: Book: {direct pages;}
  }
}
To help understand what I’m proposing, let me just say a quick word about implementation now. The idea is that there will be a special processor, that we call an “aspect weaver”, that takes as input the two different kinds of programs (the classes and the aspects). That weaver could be a pre-processor type of language processor, in which case it would output the “woven” Java program. Or it could be compiler-like, in which case it would output “woven” binary code (class files). Or it could be interpreter-like, in which case it would just produce the computation. In any case, the intuition is that the weaver will take the programs written in cross-cutting modularities and weave them together somehow to produce the combined program. part ii -- a problem and an idea
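As a rough sketch of the pre-processor flavor described in the notes (everything here is an assumption made for illustration; the class name, file layout and the omitted weave step are not the real AspectJ tool), the driver of such a weaver might look like this:

    import java.nio.file.*;
    import java.util.*;

    class WeaverDriver {
        public static void main(String[] args) throws Exception {
            List<String> componentCode = new ArrayList<>();  // e.g. Book, User, Library, Printer sources
            List<String> aspectCode = new ArrayList<>();     // e.g. the two portal aspects
            for (String path : args) {
                String text = new String(Files.readAllBytes(Paths.get(path)));
                if (path.endsWith(".java")) componentCode.add(text);
                else aspectCode.add(text);
            }
            // the heart of the tool: combine the two kinds of program into woven Java source
            // (the actual weaving logic is deliberately omitted in this sketch)
            String woven = weave(componentCode, aspectCode);
            Files.write(Paths.get("Woven.java"), woven.getBytes());
        }
        static String weave(List<String> components, List<String> aspects) {
            throw new UnsupportedOperationException("weaving logic omitted from this sketch");
        }
    }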

47 general claim remodularizing the component structure is not a satisfactory way of dealing with emergent entities want different carvings of the system: keep the clean component structure control emergent entities in “natural terms” in terms of the emergent entity with support for cross-cutting So the general claim is… part ii -- a problem and an idea

48 emergent entities: an entity that does not exist explicitly in the component model or code, but rather arises during execution. data flows: all the places this value goes... control states: two methods running concurrently; one method blocked on another; all the callers of this function; history of calls up to this point (aka the stack)... where ees are: … dataflow ees like the ones we’ve seen, but also others like…; emergent control states: two methods running concurrently (an ee you might want to talk about to prevent it from happening for example), ... now you know why concurrent code is hard to write. the things we try to control in concurrency are ees. but we aren’t using the right concepts and mechanisms, so we have to go all over the program “gathering up the origins and contact points of the ee” to control it part ii -- a problem and an idea
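To make the concurrency remark concrete, here is a small, hypothetical Java fragment (not from the digital library system): controlling the emergent entity "these two methods running concurrently" with component mechanisms alone means scattering synchronization through every method that participates.

    import java.util.LinkedList;

    class SharedBuffer {
        private final Object lock = new Object();
        private final LinkedList<Object> items = new LinkedList<>();
        void put(Object x) {
            // the exclusion concern is tangled into put...
            synchronized (lock) { items.addLast(x); }
        }
        Object take() {
            // ...and repeated in take, and in every other method that must cooperate
            synchronized (lock) { return items.isEmpty() ? null : items.removeFirst(); }
        }
    }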

49 the “aspect” concept components are modular units of functionality
aspects are modular units of control over emergent entities in the distributed digital library: library component book component user component printer component lookup dataflow aspect printing dataflow aspect now I can say what an aspect is. whereas a component is …; an aspect is so I can now enumerate 6 modular units of the distributed digital library system. there are 4 classes … and 2 dataflow aspects. for me, this list at the bottom of this slide tells me there is something right about the aspect concept. It makes it possible to talk in a crisp (but abstract) way about 6 different “modules” in this system, even though they are of radically different kinds, they cross-cut. Until I had the aspect concept I had no way of succinctly grouping these kinds of concerns. The aspect concept does that for me. part ii -- a problem and an idea

50 “aspect languages” aspect languages
connect to a component language, and provide: a mechanism for referring to emergent entities a mechanism for exercising some control over the implementation of the emergent entities support for using cross-cutting modularities and an aspect language… Let me make a point here about what I mean by SOME control. Notice that the aspect shown here doesn’t say that books should flow to printers. What it says is that when a book flows to a printer, the slots should be transferred this way. So aspects are something that are always used together with components (i.e. classes), because they come into play when the component interactions give rise to emergent entities. this crucial property of aspects is reflected in aspect languages, in that aspect languages are always connected to a component language this doesn’t mean you can’t compile aspect programs, or write reusable aspects, or have library aspects or anything like that. It just means that aspects do their work when they are woven in with components. dataflow Book {Library to Printer} {direct: pdf, tiff}; part ii -- a problem and an idea

51 summary so far programming languages support implementation and composition of components & aspects decomposition into components & aspects improved separation of concerns in both design and implementation good separation of concerns in both design and implementation complex systems design practices support decomposition programming languages support implementation and composition of components decomposition into components programming languages support implementation and composition [PAUSE, and then use the animation] OK, here’s the summary (so far): (1) we are working with complex systems, we need to manage that complexity (2) we want really good SOC, (the four properties I mentioned), “all the way through”, we need dual synergy (3) we’ve had this concept of component that in many cases provides that dual synergy [expand on this], leads to good SOC, is a winner (4) but, not all concerns fit the concept of component well, (and in turn they don’t fit the implementation mechanisms for components very well); in particular concerns involving emergent entities are a problem. the concept of component fundamentally isn’t enough to deal with them (5) I introduced a new concept of “aspect” to be used together with the concept of component, that better gets at concerns of controlling ees. (6) I showed how the aspect concept can help at the design level, and I speculated that we could build languages that would allow it to help at the programming level as well. part ii -- a problem and an idea

52 the AspectJ™ languages
III the AspectJ™ languages now what I’d like to do is quickly present our current instantiation of AOP.

53 AspectJ is… an extension to Java™
targeted at distributed and/or concurrent applications several general-purpose aspect languages remote data transfer aspect language computation migration aspect language coordination aspect language a weaver for those languages AspectJ is a set of extensions to Java targeted in particular at concurrent and/or distributed programs, as in the example I just did. AJ is a system we are using to drive our development of AO concepts, programming and tools. AJ is something we are going to make available to outside users on our web site. (We’d love to have some of you all use it.) We’ll be using feedback from those users to drive development of the language and tools. What is AJ? AJ is a set of several aspect languages that extend Java, and make it possible to program control over emergent entities using aspects as I’ve been showing. In this talk I’ll show you a couple of the aspect languages and the kinds of aspects that can be written with them. ——— other issues Like the rest of AOP, AspectJ includes some features that you can find in other languages. The difference is that we are building them completely within the framework of AOP, and so are trying to provide integrated conceptual and tool support for this style of working. Locally, Gail Murphy is doing some important user studies. part iii -- the AspectJ languages

54 a data transfer aspect language
provides control over data transfers between execution spaces transfer of arguments and/or return values control over sub-fields, sub-sub-fields etc. ot.m(op) op execution space 1 ot execution space 2 portal The first of the aspect languages is similar to the one in the examples from before. We call it a remote data transfer aspect language, and what it does, is that when you have an object on one machine [one execution space] here, that sends a message to an object on another machine here, then this aspect language makes it possible to control how the objects are passed in this call. It lets us control how the arguments are passed and how the return value is returned. It lets us control individual fields of the objects and sub-objects that are returned. Part of the language itself is based on some ideas found in the Demeter system, or actually any relational query languages. part iii -- the AspectJ languages

55 referring to the emergent entity
portal Library { Book find (String title){ return: Book: {copy title, author, isbn;} } ot.m(op) op execution space 1 ot execution space 2 In the current design of AspectJ, support for referring to the emergent entity is somewhat weaker than we would like -- the programmer has to refer to each of the methods that can cause an object of a given kind to flow. (We are working on this now.) ——— issues: part iii -- the AspectJ languages

56 copy transfer mode ot.m(op) op ot portal Library {
Book find (String title){ return: Book: {copy title, author, isbn;} } ot.m(op) op execution space 1 ot execution space 2 We currently provide three basic object passing modes. The first is “copy”, which says that when an object is passed in the call (or return), a complete copy of its implementation is passed over. Any method invocations on the object in the remote space run on that copy. ——— issues: - so that means the copy is divorced from the main object, so this is a “semantic change”. That’s right, in AspectJ, the separation of concerns is not between “semantics” and “implementation”. It’s between “basic functionality” and “aspects of distribution”. It’s the inherent nature of distribution that dealing with it has semantic implications, our goal with AspectJ is to make that clear, not to try and paper over it. part iii -- the AspectJ languages
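As a point of comparison only (an analogy I am assuming, not a statement about how AspectJ implements copy mode): the hand-written counterpart of "copy" in plain Java RMI is to pass a Serializable value object, which RMI transfers by copy; the class below is illustrative.

    import java.io.Serializable;

    // a small value object holding only the fields we want copied across the wire
    class BookSummary implements Serializable {
        final String title;
        final String author;
        final String isbn;
        BookSummary(String title, String author, String isbn) {
            this.title = title; this.author = author; this.isbn = isbn;
        }
    }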

57 gref transfer mode ot.m(op) op ot portal Library {
Book find (String title){ return: Book: gref; } ot.m(op) op execution space 1 ot execution space 2 “Global reference” mode means that a reference is sent over, and that any accesses to that reference will then communicate back (do another remote method invocation). part iii -- the AspectJ languages
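Again only as an assumed analogy, not AspectJ's actual mechanism: in plain Java RMI the hand-written counterpart of "gref" is to pass an exported remote object, so the receiver gets a stub and every access goes back to the original execution space. The interface and class here are hypothetical.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.server.UnicastRemoteObject;

    interface RemoteBook extends Remote {
        String getTitle() throws RemoteException;   // each call travels back to the home space
    }

    class RemoteBookImpl extends UnicastRemoteObject implements RemoteBook {
        private final String title;
        RemoteBookImpl(String title) throws RemoteException { this.title = title; }
        public String getTitle() throws RemoteException { return title; }
    }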

58 direct transfer mode op ot.m(op) ot portal Printer {
void print(Book book) { book: Book: {direct pages;} } op ot.m(op) ot Finally, “direct” mode means that if the originator of the call has a global reference, then a copy should be passed to the receiver, but the copy should originate from the original source. Note that these transfer modes compose with each other, and that means that aspects that use them compose with each other. I’ll say more about this later, but notice that it isn’t the same way that functionality composes. Instead, these aspects compose along dataflow paths. This is an example of how the aspect language helps us to think in the cross-cutting modularity we want to. execution space 0 part iii -- the AspectJ languages
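For comparison with the hand-revised code shown earlier (this fragment reuses its types, so it is a sketch rather than a complete program): "direct" mode is roughly what the revised Printer does by hand, pulling the bulk data straight from the library instead of routing it through the user's slow link.

    // hand-written analogue of "direct" mode, using the revised code's types
    class DirectFetchPrinter {
        private LibraryInterface theLibrary;   // remote reference back to the library
        boolean print(UserID u, BookID b) throws java.rmi.RemoteException {
            // only the small BookID handle came from the user; the large PostScript
            // travels library -> printer directly
            PostScript ps = theLibrary.getBookPS(b);
            return ps != null;   // queueing omitted in this sketch
        }
    }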

59 the aspect language cross-cuts OOP
when you send: this kind of object, as this argument of this method, send this field this way portal Printer { void print(Book book) { book: Book: {direct pages;} } Now I said before that the key thing that aspect languages need to do is help us work with the cross-cutting modularities we want to. Let me show a little more about how this language does that. The aspect language builds on the terms of OO programming, so it’s easy to read. What this aspect says is that when …[use the animation] So in this example we are talking about two classes -- the print method on Printer and the pages field of Book -- in one single place. There’s no way to say this in Java, because it violates the modularity principles of Java. But, by providing this cross-cutting modularity, we make it possible for the programmer to express this important aspect of their system in a natural way, in its own natural modularity. part iii -- the AspectJ languages

60 aspect composition cross-cuts too
these aspects compose along dataflows not along normal class/method composition portal Library { Book find (String title){ return: Book: {copy title, author, isbn;} } portal Printer { void print(Book book) { book: Book: {direct pages;} } printer library user book part iii -- the AspectJ languages

61 more on cross-cutting * * Book Library Author User Printer
portal Library { Book find(String title) { return: {copy title, author, isbn; Author bypass books;} } Let me try to show a little more about the importance of having programming language support for dealing with concerns with cross-cutting modularities. In this example, AOP, by improving separation of concerns, can be a big help with program evolution. What’s happening here is we’ve decided to make a change to the class graph (the object structure). We’ve decided that author should be a class instead of just a string, that author objects should keep a list of their books, and that books should keep a list of their authors. But now we’ve introduced a cycle into the graph and we have to decide how to break that cycle. (This has now moved from being a potentially optional optimization to an absolutely essential one.) So here’s the required change to the aspect from before. It’s still going to copy just the title, author and isbn. But now, when it’s copying the author, it is going to skip the list of books. So it’s breaking the author-to-book link in the cycle. In this case, this single piece of code is contributing to an aspect of the implementation of three classes. You can’t say this in Java, because the Java modularity rules prevent it. But because we can say this, because we have this aspect language, we can say, in one single place, something that must be said, and that would otherwise be spread into three classes. We believe this makes code maintenance much easier, because the modularity of the system is much more clear and well captured. ——— At this point, someone may complain that this “violates modularity”. The answer is that it does -- it violates the modularity of the functionality, and the modularity that Java is trying to support. But, this has to be programmed as part of this application. Written this way, in its natural modularity, it is, we believe, a better way to write it than tangled through the code. We believe it has better separation of concerns. part iii -- the AspectJ languages
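As a hedged illustration of the same idea in plain Java (an analogy I am assuming, not AspectJ's mechanism): if authors were copied with ordinary Java serialization, marking the books list transient is one way to express "bypass books" by hand, at the cost of tangling that decision into the Author class itself rather than stating it in one place as the aspect does.

    import java.io.Serializable;
    import java.util.List;

    class Author implements Serializable {
        private String name;
        private transient List<Object> books;   // "bypass books": this field is not copied across
        Author(String name, List<Object> books) { this.name = name; this.books = books; }
        String getName() { return name; }
    }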

62 what this is and isn’t weaver combines two kinds of code
equivalent effect of complex tangled code equivalent elegance of original clean code component code is unchanged natural modularity of aspects [slide figure: the clean component classes and the two portal aspects on the left pass through the aspect weaver to produce the equivalent tangled RMI code on the right] 4 classes 2 aspects I want to use this picture to make it clear just what we do and don’t have here with AspectJ (and AOP in general): - The programmer writes 6 modular units of code, 4 classes and 2 aspects. - The effect of this code is as if the programmer had written the tangled code on the right. - But the programmer didn’t write the code on the right.
They wrote the code on the left that has a nice modularity. [That’s possible because the aspect language supports an alternate modularity from that of the component structure.] - It’s compatible with Java because the functionality language, which we call JCore, is basically Java, and because the output of the weaver is ordinary Java. part iii -- the AspectJ languages
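To make “equivalent elegance of original clean code” concrete, here is a hedged sketch of what one of the clean component classes might look like in JCore (plain Java) -- my reconstruction, not the actual slide code. There is no RemoteException, no UnicastRemoteObject, and no marshalling logic; all of that comes from the portal aspects at weave time.

```java
// Illustrative sketch of a clean component class, free of distribution code.
import java.util.Hashtable;

class Library {
    private Hashtable books = new Hashtable(100);

    public Book find(String title) {
        // purely local lookup; how a Book crosses address spaces is decided
        // by the portal Library aspect, in one place, at weave time
        return (Book) books.get(title);
    }
}
```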

63 a coordination aspect language
public class Shape { protected int x = 0; protected int y = 0; protected int w = 0; protected int h = 0; int getX() { return x; } int getY() { return y; } int getWidth(){ return w; } int getHeight(){ return h; } void adjustLocation() { x = longCalculation1(); y = longCalculation2(); } void adjustSize() { w = longCalculation3(); h = longCalculation4(); } } coordinator Shape { selfex adjustLocation; selfex adjustSize; mutex {adjustLocation, getX}; mutex {adjustLocation, getY}; mutex {adjustSize, getWidth}; mutex {adjustSize, getHeight}; } The second aspect language I want to talk about today is a “coordination aspect language”. It allows us to control multi-thread synchronization in a concurrent Java program. What I’m going to show you is the higher-level declarative parts of this language. It also has some imperative constructs that I won’t have time to go into. So what I’ve got here is a simple Shape class… [tell the story]. Walk through slowly enough to be able to say and this mutex says… and this one says… Notice that getX is not mutex with getY. You can’t say this easily in ordinary Java because the modularity of synchronization is completely lined up with the modularity of classes -- all you can synchronize is methods and classes. Because AspectJ gives us constructs with cross-cutting modularity, we can say the thing we want to say easily. part iii -- the AspectJ languages
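To see what the coordinator buys, here is a hedged sketch (mine, and anachronistic -- it uses java.util.concurrent, which did not exist when this talk was given) of what it takes to hand-code the same policy in plain Java: adjustLocation excludes getX and getY, adjustSize excludes getWidth and getHeight, each adjuster is self-exclusive, yet getX and getY may still run concurrently with each other. The synchronization ends up tangled through every method of the class.

```java
// Hand-coded equivalent of the coordinator's policy, tangled into the class.
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ShapeTangled {
    private final ReentrantReadWriteLock locationLock = new ReentrantReadWriteLock();
    private final ReentrantReadWriteLock sizeLock = new ReentrantReadWriteLock();
    protected int x, y, w, h;

    // readers of the same group don't exclude each other (getX not mutex with getY)
    int getX() { locationLock.readLock().lock(); try { return x; } finally { locationLock.readLock().unlock(); } }
    int getY() { locationLock.readLock().lock(); try { return y; } finally { locationLock.readLock().unlock(); } }
    int getWidth()  { sizeLock.readLock().lock(); try { return w; } finally { sizeLock.readLock().unlock(); } }
    int getHeight() { sizeLock.readLock().lock(); try { return h; } finally { sizeLock.readLock().unlock(); } }

    // the write lock makes each adjuster selfex and mutex with its group's getters
    void adjustLocation() {
        locationLock.writeLock().lock();
        try { x = longCalculation1(); y = longCalculation2(); }
        finally { locationLock.writeLock().unlock(); }
    }
    void adjustSize() {
        sizeLock.writeLock().lock();
        try { w = longCalculation3(); h = longCalculation4(); }
        finally { sizeLock.writeLock().unlock(); }
    }

    private int longCalculation1() { return 0; }   // stand-ins for the slide's methods
    private int longCalculation2() { return 0; }
    private int longCalculation3() { return 0; }
    private int longCalculation4() { return 0; }
}
```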

64 fits object-oriented modularity
static coordinator Shape { selfex adjustLocation, adjustSize; mutex {adjustLocation, getX}; mutex {adjustLocation, getY}; mutex {adjustSize, width}; mutex {adjustSize, height}; } shape per-object per-class coordinator Shape { selfex adjustLocation, adjustSize; mutex {adjustLocation, x}; mutex {adjustLocation, y}; mutex {adjustSize, width}; mutex {adjustSize, height}; } shape per-object Let me say a bit more about how this language works and how it helps support cross-cutting modularities. Like the RDTA language, it is built in terms of object concepts, and it can fit into object modularity. The default… part iii -- the AspectJ languages
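A rough way to picture the per-object vs. per-class (static) distinction, in plain-Java terms (this is my reading of the slide, not a definition from the talk): a per-object coordinator keeps one set of exclusion state per Shape instance, while a static coordinator shares one set across all instances.

```java
// Illustration only: where the coordinator's exclusion state would live.
class CoordinationState { /* selfex/mutex bookkeeping for one coordinator */ }

class ShapeWithPerObjectCoordinator {
    // one coordinator per shape: two shapes can adjustLocation() concurrently
    private final CoordinationState coord = new CoordinationState();
}

class ShapeWithPerClassCoordinator {
    // one coordinator for the whole class: exclusion spans all shape instances
    private static final CoordinationState coord = new CoordinationState();
}
```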

65 cross-cuts object-oriented modularity
per-object per-class multi-class any methods static coordinator Shape, Screen { selfex adjustLocation, adjustSize; mutex {adjustLocation, getX}; mutex {adjustLocation, getY}; mutex {adjustSize, width}; mutex {adjustSize, height}; } shape screen The classes can be any classes; they don’t have to be sub/super classes or siblings. The methods can be any methods, defined in the class or in supers. part iii -- the AspectJ languages

66 status of AspectJ some preliminary user studies complete
results promising, not yet conclusive first public release to go on web-site shortly free use (including in products) weaver, documentation, example programs coordination aspect language only next release early June remote data transfer aspect language later releases other aspect languages, operate directly on class files… AspectJ is a set of extensions to Java targeted in particular at concurrent and/or distributed programs, as in the example I just did. AJ is a system we are using to drive our development of AO concepts, programming and tools. AJ is something we are going to make available to outside users on our web site. (We’d love to have some of you all use it.) We’ll be using feedback from those users to drive development of the language and tools. What is AJ? AJ is a set of several aspect languages that extend Java, and make it possible to program control over emergent entities using aspects as I’ve been showing. In this talk I’ll show you a couple of the aspect languages and the kinds of aspects that can be written with them. ——— other issues Like the rest of AOP, AspectJ includes some features that you can find in other languages. The difference is that we are building them completely within the framework of AOP, and so are trying to provide integrated conceptual and tool support for this style of working. Locally, Gail Murphy is doing some important user studies. part iii -- the AspectJ languages

67 IV implementing aspect weavers jump to conclusion
Now I’d like to talk just a little bit about implementing aspect weavers. Not going to say much, there isn’t time in this talk for that. We can talk some more in questions or afterwards. First let me make an important point: just like most people don’t need to implement a Java compiler, most people don’t need to implement an aspect weaver. That’s part of why this section will be so short, most people simply don’t need to know it. I’m going to go through this using the pre-processor model of aspect weaver implementation. There are good reasons to start implementing aspect weavers that way, but that certainly isn’t the only way to implement them; they could also be interpreter-like, or native-compiler-like. Weavers for AspectJ could even operate directly on class files, they wouldn’t need the source code for the component program. jump to conclusion

68 what aspect weavers do implement one or more aspect languages
allow us to program in alternate modularity in the modularity of the emergent entity help with cross-cutting aspect weaver must “gather up the roots and contact points of emergent entities” places spread around the OO program this can appear difficult... to understand the issues of how aspect weavers are implemented, it helps to think about what the aspect weaver has to do, to think about that it helps to remember what we have to do when we try to control emergent entities without AOP… we end up having to go throughout the OO program to find all the different places that give rise to the ee… places where the ee contacts the OO program. and once we have them in hand we have to do some coordinated editing operation on them so that’s what the aspect weaver has to do -- it has to gather up the roots and contact points of the emergent entity and do a coordinated edit on all of them that can appear to be pretty complex… for example part iv -- implementing aspect weavers

69 “frob every method call”
class Library { Hashtable books = new Hashtable(100); public Book find(User u, String title) { frob(); if(books.containsKey(title)) { Book b = (Book)books.get(title); if (b != null) { if (b.getBorrower() == null) {frob(); b.setBorrower(u);} return b; } } return null; } } a very simple example… suppose that a given aspect weaver needs to “frob every method call”. I’ve shown this by… This isn’t just a silly example. The AspectJ aspect weaver has to do something similar to this for both the aspect languages I just showed you. now that doesn’t appear so bad in this VERY simple example, but when the program gets larger, and the number of different kinds of weavings that have to happen goes up, this quickly gets un-manageable. but there’s another way of thinking about it, that makes it much simpler to implement. That is by thinking about it as a “domain transform” problem. part iv -- implementing aspect weavers

70 domain transforms [figures: the same signal plotted in the time domain (v vs. t) and in the frequency domain (v vs. f)]
what I mean when I say a “domain transform” is just like what the circuit designers mean when they say it. Circuit designers talk about working in both the time domain and the frequency domain. In the time domain they draw pictures like this… here I’ve got a square wave. In the frequency domain they draw pictures like this… these pictures are different, but related, in fact these two pictures are of the same signal. what’s important about these two domains is that certain things that are diffuse in one domain (one picture) are localized in the other. So this figure localizes in this single value something which is spread all over in this figure, namely the frequency of the waveform the circuit designers use the Fourier transform to automatically move back and forth between the two domains what is diffuse in one domain is local in another the Fourier transform moves between the two it localizes what was non-local and vice-versa part iv -- implementing aspect weavers
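For readers who want the analogy pinned down, this is the standard Fourier transform pair being referred to (general background, not from the slides): the same signal viewed as x(t) in the time domain and as X(f) in the frequency domain.

```latex
% Fourier transform pair: forward (time -> frequency) and inverse.
\[
  X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-2\pi i f t}\, dt ,
  \qquad
  x(t) = \int_{-\infty}^{\infty} X(f)\, e^{2\pi i f t}\, df .
\]
```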

71 reflection links two domains
the object domain: localizes books and their functionality the meta domain: localizes “frob every method call” meta call_method {all} { frob(); } class Library { Hashtable books; Library() { books = new Hashtable(100); } public Book find(User u, String title) { if(books.containsKey(title)) { Book b = (Book)books.get(title); if (b != null) { if (b.get_borrower() == null) b.set_borrower(u); return b; } } return null; } } now it turns out that we can do domain transforms too… the technology called computational reflection lets us work in two different, but connected domains. we can write standard OO code, that we are all familiar with, but then when we need to do something like “frob all method calls”, we can shift over to the meta-domain, where that operation will be highly localized, in a piece of code like this. so reflection lets us write these two different programs, which can be executed together to have the same effect as the much messier program before. I think this is really THE KEY INSIGHT of REFLECTION -- that it is possible to arrange a programming system where we program in terms of two different, but coordinated domains. (the point is that this isn’t like a sub-component -- it’s written in one place, but unlike a sub-component, it takes effect in many places, without editing those places) part iv -- implementing aspect weavers
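The meta-level code above is in the talk’s own notation; as a rough, self-contained approximation of the same idea in ordinary Java, a dynamic proxy localizes “frob every method call” in one place. This uses java.lang.reflect.Proxy, which is just one reflective mechanism -- it is not how the AspectJ weaver works, and it only intercepts calls made through an interface.

```java
// Sketch: the cross-cutting "frob on every call" behavior written once,
// at the meta level, and applied to any object wrapped in the proxy.
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

class FrobHandler implements InvocationHandler {
    private final Object target;

    FrobHandler(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        frob();                              // the localized cross-cutting action
        return method.invoke(target, args);  // then the ordinary object-domain code
    }

    static void frob() { System.out.println("frob"); }

    // Wrap any object so that every call through the given interface is frobbed.
    @SuppressWarnings("unchecked")
    static <T> T frobbed(T target, Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, new FrobHandler(target));
    }
}
```

For example, wrapping a library object behind an interface (such as the LibraryInterface from the tangled example) with FrobHandler.frobbed(lib, LibraryInterface.class) frobs every call to it without editing the Library class itself.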

72 aspect weavers can require a variety of domain-transforms
method calls (all, per-class, per-selector…), field accesses (…), methods (…); who else is running where will this value go next reflection unfolding CPS conversion partial evaluation abstract interpretation we have found that thinking this way, in terms of domain transforms, makes the implementation of aspect weavers more manageable. It lets us weave multiple aspects into a component program without getting confused. As we’ve worked on this there are a number of domain transforms we’ve found useful (no single weaver needs to do all these). And we’ve been having pretty good success using not just reflection, but also a number of other programming language techniques to support this domain-transform approach. So that’s really the core of aspect weaving. It’s built on a lot of existing technology that we use in a different way -- that we use to support well-defined domain transforms. part iv -- implementing aspect weavers

73 V conclusions So I’m going to wrap up now, first by presenting a one-slide summary of the argument structure of this talk. You can use this to remember the rationale for AOP. Then I’ll take just a minute to tell you a deeper dream of mine, which has to do with where AOP can take us not just in the next ten years, but also beyond that. I hope that will be both exciting and give you another perspective on what AOP is and can do.

74 summary improved separation of concerns in both design and implementation complex systems design practices support decomposition into components components & aspects aspect-oriented design aspect-oriented programming programming languages support implementation and composition of components walk it through in the now-familiar way So here’s a one-slide summary of the argument I presented in this talk: - the complexity of the systems we build is overwhelming -> soc - component (concept and mechanism) has done well for us - but concerns involving emergent entities cross-cut component structure and component code -> tangling - aspect (concept and mechanism) helps by better supporting concerns that involve emergent entities. Aspects are program constructs that cross-cut the constructs of the primary programming language and so help programs to implement concerns that cross-cut the primary program. I don’t know how long it will take AOP to mainstream. Starting pretty much right away, systems like AspectJ will be able to help “early adopters” improve programs that deal with these kinds of emergent entities. Improving the separation of concerns in those programs will make them easier to develop, easier to maintain, more reusable etc. Starting to use AOP in the near term will be like it was to start using Cfront 10 years ago. ——— issues should AOD be off the slides? (post AOP-ICSE98 workshop I’m inclined to think so.) part v -- conclusions

75 designing and building a simple bridge...
an analogy (what I hope aspects are like) but now let me share another perspective about AOP -- it’s my personal deep hope about where AOP can take the field in the longer-term future. the best way to explain it is by an analogy to other engineering disciplines, because the hope is that AOP can help us take an important step closer to them. if you ask a mechanical engineer to build you a simple bridge, say out of three wooden blocks like this... designing and building a simple bridge... part v -- conclusions

76 different kinds of picture
simple statics simple dynamics m more detailed statics then you will see them draw several different kinds of pictures [USE THE ANIMATION] this first one is a picture that captures the simple static forces in the bridge this second one captures those static forces in more detail, and as a result makes it possible to look at important issues the first picture hides, like the varying shear force in the horizontal member but this third picture is very different. It is a picture of the simple dynamic properties of the bridge. This picture is made up of completely different kinds of elements than the first two. It isn’t more or less detailed, it is just different. You can take the pictures on the left apart into smaller and smaller pieces and you’ll never find the picture on the right. In fact, as soon as you take the picture on the left apart, BAM the picture on the right disappears. What’s going on is that the picture on the right is about emergent properties of the bridge. Its view of the bridge cross-cuts the other views, it isn’t more or less detailed. This ability, to look at (analyze in terms of) different kinds of cross-cutting properties is very important. Other engineering disciplines have used it to great advantage to deal with extremely complex systems. (get predictable and useful in here) part v -- conclusions

77 a distributed digital library
So now you can probably start to see where I’m going (but don’t stop listening yet, because there is a funny little turn in here). My hope is that in the future, when we ask computing engineers to build a system like this... part v -- conclusions

78 different kinds of picture
[class diagram: Book (title: string, author: string, isbn: int, pdf: pdf, tiff: tiff), with Library, User and Printer connected by holds *, cooperates, accesses and uses associations] They will be able to draw pictures like this, and like this… But, just drawing those pictures isn’t enough! There’s already lots of good work on techniques that do some elements of that. To see why, note that these pictures, like the bridge pictures before, are only models of the real computation. Models work by abstracting properties of the real system to get to the view they want. As such, it is only “moderately difficult” to get to models that cross-cut each other in predictable and useful ways. (how long did it take to get the time-freq and the Fourier transform?) But in software we have a different problem. That’s because in software we have these funny things called programs and while programs feel a lot like models in many ways (they are formal systems), they also have the property that they can be automatically run to produce the actual computation. That fact is, I believe, the special thing about software -- the software is both model-like and can automatically produce the computation we are after. If we want to preserve that special fact, we have to do something a bit more than just draw cross-cutting pictures. What we have to be able to do is write programs that cross-cut each other in predictable and useful ways. modeling of functionality modeling of control over emergent entities part v -- conclusions

79 different kinds of program
class Library { Hashtable books; Library() { books=new Hashtable(10); } … } portal Printer { void print(Book book) { book: Book: {direct pages;} } coordinator User, Library { mutex {checkOut, checkIn}; } programming with different carvings of the system allows clean separation of: programming of functionality programming of control over emergent entities What we want to be able to do is program in terms of different carvings of the world. not just: - carvings at different levels of detail, which we’ve had for some time, or - systems in which different parts are carved different ways, which we’ve also had for some time, but - the ability to program the same part in terms of two cross-cutting carvings that “weave together” to produce the complete computation we want that’s what AOP is about. [building on reflection] AOP is about making it possible to program this way. and in that sense, I’m hopeful that AOP can take us a step towards the natural counterpart for us, in computing, of the other engineers’ use of multiple cross-cutting models of their systems part v -- conclusions

80 objects & aspects AOP enables modular control over emergent entities
[class diagram: Book (title: string, author: string, isbn: int, pdf: pdf, tiff: tiff), with Library, User and Printer connected by holds *, cooperates, accesses and uses associations] So that’s it. AOP, the rationale, the idea, the current state, and where we are going with it. I think it’s going to be useful, and I think it’s going to be exciting. I hope a few of you will have been interested enough by this talk to try it out and come along for the ride. AOP enables modular control over emergent entities using languages that support cross-cutting modularities part v -- conclusions

81 object & aspect programs
class Library { Hashtable books; Library() { books=new Hashtable(10); } … } portal Printer { void print(Book book) { book: Book: {direct pages;} } coordinator User, Library { mutex {checkOut, checkIn}; } AOP enables modular control over emergent entities using languages that support cross-cutting modularities part v -- conclusions
