Saturday, May 1, 2010

Reply inline:

On Fri, May 8, 2009 3:20 pm, Joshua Ferraro wrote:

[...] This decision to move to Plone is a widely documented one [...]

I knew about the decision to move to Plone and supported that move myself years earlier. My complaint is mainly about how it has been implemented, with no open community discussion. I find some structural and content problems.

The CSS is very attractive except for some layout problems I discovered. The CSS design of the current koha.org site has long been considered quite attractive, and this design is a good match for it.

In particular, I'm sure that some folks in non-English speaking contexts will be quite keen to start translating as soon as possible so that the Koha website can be used widely around the world.

I am concerned that poor planning may have left some elements, such as navigation links, untranslatable. I may be entirely mistaken; the proof would be in the testing, but what I see already translated leaves me in some doubt.

[...] There were no localization options on the old website, so this isn't a blocker.

If some localisation could not be done because of poor planning, that should be a blocker.

Also, I don't find any links to koha-fr.org on the old Koha site either.

There are some references, but no proper links to koha-fr.org as a French localised site in the way there had been at one time. The koha-fr.org site has been functioning as a community website for the Koha French language community, even if it may once have been less communitarian.

[...] 2. KOHA WORLD MAP. [...]

There seem to be many functionality bugs which may be related to JavaScript problems in how Google Maps has been used.

There are some significant bugs in the map function, but they should not be considered blocking; this is a feature which had been lost, after all. The small number of libraries currently included gives a poor, inaccurate impression of Koha.
The map had been woefully out of date four years ago, but it gave a much better representation of the distribution of Koha then than the current world map gives now.

I hope that libraries will actively populate it.

As do I. This is not a blocker for site launch either. I agree that nothing I have seen about the map should be considered a blocker.

[...] 3. ORGANISATIONAL PROBLEMS.

3.1. DEMONSTRATION.

The demonstration links are broken because demonstrations had formerly been linked from the showcase, which is now more appropriately the Koha world map. Demonstrations are now listed in a links subsection of documentation, which does not seem an appropriate location to me.

[...] The demo links are working fine for me; can you specify where exactly you are seeing broken links?

The link from the main page to demonstrations still points to the page which is no longer being used for that purpose in the new organisation. However, I contend that the demonstrations do not belong as a subsection of documentation, although they should also be linked from documentation. Other things seem inappropriately placed in documentation even if they should also be linked from documentation. The new organisation is liable to make some material harder for users to find, when it should do the opposite.

[...] 3.2. PAY FOR SUPPORT.

The pay-for-support page no longer seems as communitarian as it should.

[...] Linking to a more complete attribution document will also avoid the greatest problem present in the new page, which has been the source of some controversy recently. The presentation as it now stands violates a principle of the Koha community guidelines: avoiding the appearance of any particular Koha support company seeming more official than any other.

3.2.1. ORDERING OF PAY FOR SUPPORT.

The ordering of listings on the page has been changed from the arbitrary alphabet to a less arbitrary but incorrectly specified historical ordering.
I suggest providing links to various ordering arrangements and allowing visitors to the website to choose which arrangement to view. In any historical arrangement, the historical order should be correctly specified, as I explain below.

3.2.1.1. ORGANISATIONAL SCHEMES. [...]

3.2.1.1.2. HISTORICAL.

Historical organisation is the next-worst organisational scheme unless the material being organised has an intrinsically historical function. Organising the history of something historically is natural; organising other aspects of something in an historical manner is inappropriate.

In a proper historical presentation, Katipo and Paul Poulain would be listed on their own account, even if they are no longer prominently offering Koha support or are now doing business under a different name. In such cases, there should be links within the page from Katipo to LibLime and from Paul Poulain to BibLibre, with appropriate annotations at the origin of the links. "Katipo Koha interests acquired by LibLime" and "Paul Poulain formed BibLibre" would be appropriate annotations. The use of 'grandfathered' is a mistaken use of the expression.

This is not an issue that can easily be resolved, unfortunately, and the koha-manage group decided to list by date of entry rather than follow the ideas above.

When consensus is lacking, the Koha community should solve the problem in the wonderful communitarian way it always has in the software: provide a system preference and let the users choose. The equivalent of a system preference for this problem on the website would be different pages with different orderings of the information, which visitors could select to suit their interests.

I understand that historical presentation had been suggested, but that presentation should then actually be historical. See the section quoted above from my previous post, and additionally my previous post itself, for how to correct the problem.
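[Editorial note: the "system preference" idea above could be realised by generating one page per ordering scheme and letting no single ordering be privileged. A minimal Python sketch of that approach; the company names and dates below are invented placeholders, not the actual Koha support listings.]

```python
from datetime import date

# Hypothetical support listings; names and entry dates are placeholders only.
listings = [
    {"name": "Company C", "entered": date(1999, 9, 1)},
    {"name": "Company A", "entered": date(2005, 3, 1)},
    {"name": "Company B", "entered": date(2002, 6, 1)},
]

# One ordering scheme per "page"; visitors pick the arrangement they prefer.
orderings = {
    "alphabetical": lambda xs: sorted(xs, key=lambda x: x["name"].lower()),
    "historical": lambda xs: sorted(xs, key=lambda x: x["entered"]),
}

def render(scheme):
    """Return the listing names in the order used by one page variant."""
    return [x["name"] for x in orderings[scheme](listings)]
```

Each scheme would then be rendered to its own page and linked from the pay-for-support page, which is the website analogue of a system preference.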
This particular problem has been much too contentious to be considered anything other than a blocker until resolved.

4. LAYOUT.

I also note some cases where text-containing layout elements are too narrow for their content, and text sloppily spills out of those elements and overlaps the body element. I have not yet seen it in my brief view, but the hazard is that text spilling out of one element will overwrite text in other text-containing elements.

This problem is evident with standard browser settings for any user.

Please detail these issues and provide some suggestions for fixes to the CSS if you can, and we'll do our best to update.

I need to investigate further, but I noticed the layout problem immediately when looking at the news or events pages. I do not find the problem at the moment. Maybe it has been fixed already, or too much playing with the website has cached the wrong CSS in my web browser.

Increasing the text size for disability access would only exacerbate such already existing problems. Plone can be perfectly compliant with disability access rules, but implementers need to observe them.

If you notice, there is an 'accessibility' link which should address this concern.

I assume that you do not mean the Section 508 or WCAG links. Which link do you mean?

[...]

Thomas Dukleth
Agogme
109 E 9th Street, 3D
New York, NY 10003
USA
212-674-3783

_______________________________________________
Koha-announce mailing list
Koha ... @

On 23.02.2009 at 18:14, Mojca Miklavec wrote:

thanks a lot for the very nice code :) :) :) The second example (H_2^+) does not return the expected result - it should have been H\lohi{2}{+} instead, but I don't require such cases for the current document, so I guess that I'll just replace the old macro with this one for now in order not to get distracted with TeX problems too much :)

You can try the attached; it uses lpeg to parse the content.
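[Editorial note: the attachment mentioned above is not included here, and it uses lpeg. As a rough, non-authoritative illustration of the kind of rewriting being discussed (turning a trailing _sub^sup pair like H_2^+ into a \lohi{sub}{sup} call), here is a Python regex sketch; the function name is invented.]

```python
import re

# Rewrite "X_a^b" as "X\lohi{a}{b}"; a sketch of the idea only,
# not the lpeg-based code from the attachment.
_sub_sup = re.compile(r"_([^_^]+)\^([^_^]+)")

def to_lohi(formula: str) -> str:
    """Replace each _sub^sup pair with a \\lohi{sub}{sup} macro call."""
    return _sub_sup.sub(r"\\lohi{\1}{\2}", formula)
```

For example, to_lohi("H_2^+") yields "H\lohi{2}{+}", while input without a subscript/superscript pair passes through unchanged.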
Wolfgang

___________________________________________________________________________________
If your question is of interest to others as well, please add an entry to the Wiki!

maillist : ntg- ... @ntg.nl / www.ntg.nl/mailman/listinfo/ntg-context
webpage : www.pragma-ade.nl / tex.aanhet.net
archive : foundry.supelec.fr/projects/contextrev/
wiki : contextgarden.net
___________________________________________________________________________________

On Fri, Feb 06, 2009 at 11:28:05PM -0800, Junio C Hamano wrote:

It has been quite a while since I did the "show previous" feature of "git-blame --porcelain" that has been forever queued in 'next'; if I remember correctly, it implemented (2). The reason why it never graduated from 'next' is exactly this issue. By definition, there is no "previous" line number (if there were such a thing that says "this line was at line N in the parent of the blamed commit", then the commit wouldn't have taken the blame but would have passed it down to the parent), and we need to come up with a reasonable heuristic. So perhaps this discussion will motivate somebody to finish that part off, and then tig and other Porcelains can just read the necessary line number from the git-blame output.

Do we actually have a heuristic that is better than "this was the line in the original source file" (i.e., (2) as I described)? Because we already have that in the first number that comes from "blame --incremental". So perhaps we should start using it and see how well it works in practice (because like all heuristics, getting a good one is likely to be a lot of guess-and-check on what works in practice). Of course I say "we" and I mean "Jonas". ;)

I worked up a small tig patch below which seems to work, but:

1. The "jump to this new line number on refresh" code is very hack-ish (read: it is now broken for every view except blame), and I'm not sure of the most tig-ish way of fixing it.

2. I'm very unsure of the line number parsing.
The parse_number function confusingly parses " 123 456" as "456", so perhaps there is some invariant of the parsing strategy that I don't understand (like: our pointer is supposed to be at the last character of the previous token and _not_ on the space). So the parsing in parse_blame_commit is a bit hack-ish.

3. Nothing in tig records the file that the source line came from, so we could be jumping to an arbitrary line number that really came from some other file.

Anyway, here it is.

---
diff --git a/tig.c b/tig.c
index 97794b0..faec056 100644
--- a/tig.c
+++ b/tig.c
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include
 #include
@@ -2574,7 +2575,7 @@ reset_view(struct view *view)
 	view->p_offset = view->offset;
 	view->p_yoffset = view->yoffset;
-	view->p_lineno = view->lineno;
+	/* view->p_lineno = view->lineno; */
 	view->line = NULL;
 	view->offset = 0;
@@ -4180,6 +4181,7 @@ struct blame_commit {
 struct blame {
 	struct blame_commit *commit;
+	int lineno;
 	char text[1];
 };
@@ -4243,14 +4245,16 @@ parse_blame_commit(struct view *view, const char *text, int *blamed)
 {
 	struct blame_commit *commit;
 	struct blame *blame;
-	const char *pos = text + SIZEOF_REV - 1;
+	const char *pos = text + SIZEOF_REV - 2;
 	size_t lineno;
 	size_t group;
+	size_t orig_lineno;

-	if (strlen(text) <= SIZEOF_REV || *pos != ' ')
+	if (strlen(text) <= SIZEOF_REV || pos[1] != ' ')
 		return NULL;

-	if (!parse_number(&pos, &lineno, 1, view->lines) ||
+	if (!parse_number(&pos, &orig_lineno, 1, INT_MAX) ||
+	    !parse_number(&pos, &lineno, 1, view->lines) ||
 	    !parse_number(&pos, &group, 1, view->lines - lineno + 1))
 		return NULL;
@@ -4264,6 +4268,7 @@ parse_blame_commit(struct view *view, const char *text, int *blamed)
 		blame = line->data;
 		blame->commit = commit;
+		blame->lineno = orig_lineno + group - 1;
 		line->dirty = 1;
 	}
@@ -4425,8 +4430,10 @@ blame_request(struct view *view, enum request request, struct line *line)
 	case REQ_PARENT:
 		if (check_blame_commit(blame) &&
-		    select_commit_parent(blame->commit->id, opt_ref))
+		    select_commit_parent(blame->commit->id, opt_ref)) {
+			view->p_lineno = blame->lineno;
 			open_view(view, REQ_VIEW_BLAME, OPEN_REFRESH);
+		}
 		break;

 	case REQ_ENTER:
--
To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majo ... @
More majordomo info at

It is parametrically polymorphic in a. And no, it's an arbitrary decision, BUT... it allows me and other users to define generally useful behaviours and widgets to package with the library using the basic types, without locking down 'a'. The EventData type looks like this:

data Event a = Event { ..., edata :: EData a, ... }

data EData a = EChar Char | EString String | EStringL [String]
             | EByteString ByteString | EByteStringL [ByteString]
             | EInt Int | EIntL [Int]
             | EDouble Double | EDoubleL [Double]
             | EBool Bool | EBoolL [Bool]
             | EOther a | EOtherL [a]

Now, given that arbitrary decision, I'd be willing to modify Event so that it is parametric on 'a' without EData, and include EData as an "example" binding for 'a' if the user chooses to use it. However, I foresee most behaviours and widgets that are "generally useful" being dependent on this type, which is why I made it a basic part of Event.

-- Jeff

On Thu, Apr 2, 2009 at 11:05 AM, Jules Bean < jul ... @ > wrote:

Jeff Heard wrote:

A last but somewhat minor thing is that the Event type is fairly general, allowing for multiple data to be attached to a single event, and this data to be of many of the standard types (Int, String, Double, ByteString, etc.) as well as a user-defined type. Of course, such an event type could be defined for other FRP frameworks as well.

That sounds the opposite of general. That sounds specific. (Int, String, Double, ByteString as well as a user-defined type....) Can you explain the reason for the EDouble, EString (etc.) alternatives, as opposed to making the event simply (parametrically) polymorphic in "a"?

Jules

_______________________________________________
Haskell-Cafe mailing list
Hask ..
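[Editorial note: the trade-off debated above - a closed sum of "blessed" payload types with one escape hatch, versus a fully parametric payload - is not Haskell-specific. A rough Python analogue with invented names, sketching the shape of both designs rather than Hieroglyph's actual API:]

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

A = TypeVar("A")

# Escape hatch for user-defined payloads (analogue of EOther).
@dataclass
class EOther(Generic[A]):
    value: A

# Jeff's style: the payload is one of a fixed set of standard types,
# plus the EOther escape hatch. (Names invented for illustration.)
EData = Union[int, float, str, bytes, bool, EOther[A]]

@dataclass
class EventClosed(Generic[A]):
    edata: "EData[A]"

# Jules's style: fully parametric - the library never inspects the payload.
@dataclass
class EventParametric(Generic[A]):
    edata: A

def describe(ev: EventClosed) -> str:
    """A 'generally useful' widget can dispatch on the blessed cases."""
    d = ev.edata
    if isinstance(d, bool):   # test bool before int: bool subclasses int
        return "bool"
    if isinstance(d, int):
        return "int"
    if isinstance(d, str):
        return "string"
    if isinstance(d, EOther):
        return "user-defined"
    return "other standard type"
```

The point survives translation: widgets written against the closed cases (like describe) work for every client unchanged, while the parametric version forces each client to supply its own dispatch on the payload.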
Oh, and I don't disagree with that at all. I just have an aesthetic preference for multiply-qualified library names. Chalk it up to the fact that my partner's a librarian, so I'm used to putting things in categories, subcategories, and sub-sub-categories :-)

-- Jeff

On Thu, Jun 11, 2009 at 2:44 PM, Henning Thielemann < lemm ... @ > wrote:

On Thu, 11 Jun 2009, Jeff Heard wrote:

Case in point: Hieroglyph. What's it do? import Hieroglyph. Is there any clue from my function names which ones belong to a library called Hieroglyph? No. However, import Graphics.Rendering.Hieroglyph, and when I see a function somewhere in the code called "arc" or "plane" or "circle", I know it probably goes with the rendering package.

______________________________

Try:

./autogen.sh

This gives:

Running autoheader
Running libtoolize
You should add the contents of `/usr/share/aclocal/libtool.m4' to `aclocal.m4'.
Running aclocal
aclocal: couldn't open directory `m4': No such file or directory

Here it gives:

./autogen.sh
Running autoheader
Running libtoolize
libtoolize: putting auxiliary files in `.'.
libtoolize: copying file `./config.guess'
libtoolize: copying file `./config.sub'
libtoolize: copying file `./install-sh'
libtoolize: copying file `./ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIR, `m4'.
libtoolize: copying file `m4/libtool.m4'
libtoolize: copying file `m4/ltoptions.m4'
libtoolize: copying file `m4/ltsugar.m4'
libtoolize: copying file `m4/ltversion.m4'
libtoolize: copying file `m4/lt~obsolete.m4'
libtoolize: Consider adding `-I m4' to ACLOCAL_AMFLAGS in Makefile.am.
Running aclocal
Running autoconf

As you see, the m4 dir gets created here out of libtoolize's benevolence; that libtoolize is part of the libtool-2.2.6-11.fc11.i586 RPM. Is it installed at your end?

I have libtool-1.5.24-3.fc8; as you can see, it's Fedora 8. The autogen.sh script explicitly checks for libtool 1.5, but that doesn't seem to help.
After all, the following does create a configure script, instead of running autogen.sh:

cp /usr/share/aclocal/libtool.m4 aclocal.m4
autoheader
libtoolize --force --copy
autoconf

Honestly, I don't quite understand why there is this autogen.sh script anyway. Why can't the packaged archive contain a reasonable configure script to begin with? The rep-gtk code doesn't seem to be more complicated than other software which is happily shipped with a configure script. Or am I overlooking something?

In any case, the user experience could be improved greatly if the installation of sawfish did not involve all sorts of trickery, but just the simple ./configure; make; make install cycle on each of the dependencies (librep, rep-gtk, sawfish).

In any case, thanks for the hard work, guys; it's good to see that sawfish is alive again!

Cheers,
Daniel

--
Psss, psss, put it down! -