
Modified items

All recently modified items, latest first.
RPMPackage zest.releaser-3.43-1.lbn13.noarch
zest.releaser is a collection of command-line programs to help you automate the task of releasing a software project. It's particularly helpful with Python package projects, but it can also be used for non-Python projects. For example, it's used to tag buildouts - a project only needs a version.txt file to be used with zest.releaser. It will help you automate:

* Updating the version number. The version number can be in either setup.py or version.txt. For example, 0.3.dev0 (current) to 0.3 (release) to 0.4.dev0 (new development version).
* Updating the history/changes file. It logs the release date on release and adds a new section for the upcoming changes (new development version).
* Tagging the release. It creates a tag in your version control system named after the released version number.
* Uploading a source release to PyPI. It will only do this if the package is already registered there (otherwise it will ask, defaulting to 'no'); zest.releaser is careful not to publish your private projects! It can also check out the tag in a temporary directory in case you need to modify it.
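The version juggling described above can be sketched in a few lines of Python. These helpers are illustrative only, not zest.releaser's own code:

```python
# Sketch of the dev-version bumping that zest.releaser automates.
# strip_dev and bump_dev are hypothetical helpers, not the package's API.

def strip_dev(version: str) -> str:
    """0.3.dev0 (current development version) -> 0.3 (release)."""
    return version.split(".dev")[0]

def bump_dev(version: str) -> str:
    """0.3 (release) -> 0.4.dev0 (new development version)."""
    parts = version.split(".")
    parts[-1] = str(int(parts[-1]) + 1)  # increment the last segment
    return ".".join(parts) + ".dev0"

print(strip_dev("0.3.dev0"))  # -> 0.3
print(bump_dev("0.3"))        # -> 0.4.dev0
```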
RPMPackage zest.pocompile-1.3-2.lbn13.noarch
This package compiles po files. It contains a zest.releaser entry point and a stand-alone command-line tool.

Goal: you want to release a package that has a locales dir (or locale, or anything else, as long as it has an LC_MESSAGES folder somewhere in it) with translations in .po files. You want to include the compiled .mo files in your release as well, but you do not want to keep those in a revision control system (like subversion), as they are binary and can easily be recreated. That is good. This package helps with that.

Want .mo files? Add a MANIFEST.in file. When you use python setup.py sdist to create a source distribution, distutils (or setuptools, distribute or distutils2) knows which files it should include by looking at the information of the revision control system (RCS). This is why, in the case of subversion, you should use a checkout and not an export: you need the versioning information. (For other RCSes, or for subversion 1.7+, you currently need to install extra packages like setuptools-git.)
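A minimal MANIFEST.in for the situation described might look like this (the src/mypackage/locales path is a hypothetical example):

```
recursive-include src/mypackage/locales *.po *.mo
```

This tells sdist to ship both the source catalogs and the compiled .mo files, even though the latter are not under revision control.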
RPMPackage xlhtml-0.5-14.fc18.armv6hl
The xlhtml program takes an Excel 95 or 97 file as input and converts it to HTML. Output is via standard out, so it can be redirected to files, piped to filters, or used as a gateway to the internet. The pptHtml program converts PowerPoint files to HTML.
RPMPackage xlhtml-0.5-11.fc12.x86_64
The xlhtml program takes an Excel 95 or 97 file as input and converts it to HTML. Output is via standard out, so it can be redirected to files, piped to filters, or used as a gateway to the internet. The pptHtml program converts PowerPoint files to HTML.
RPMPackage wv2-0.4.1-3.fc13.x86_64
wv is a library which allows access to Microsoft Word files. It can load and parse Word 2000, 97, 95 and 6 file formats. (These are the file formats known internally as Word 9, 8, 7 and 6.) There is some support for reading earlier formats as well: Word 2 docs are converted to plaintext.
RPMPackage wicked-1.1.10-3.lbn13.noarch
wicked is a compact syntax for doing wiki-like content linking and creation in Zope and Plone.
RPMPackage webcouturier.dropdownmenu-2.3.1-2.lbn13.noarch
Overview: you get dropdown menus for those items in global navigation that have subitems. Requires plone.browserlayer to be installed in your site.

How it works: dropdown menus are built based on the same policy as the Site Map, so they show the same tree as you would get in the Site Map or the navigation portlet in the appropriate section. This means no private objects for anonymous users, and no objects excluded from the navigation - exactly the behavior you would expect from the Site Map or the navigation portlet.
RPMPackage transmogrify.xmlsource-1.0-2.lbn13.noarch
A simple XML reader for a transmogrifier pipeline.
RPMPackage transmogrify.webcrawler-1.2.1-2.lbn13.noarch
A source blueprint for crawling content from a site or from local HTML files. Webcrawler imports HTML either from a live website, from a folder on disk, or from a folder on disk whose HTML used to come from a live website and may still have absolute links referring to that website. To crawl a live website, supply the crawler with a base http url to start crawling from. This url must be a prefix of all the other urls you want from the site.
RPMPackage transmogrify.sqlalchemy-1.0.1-2.lbn13.noarch
Feed data from SQLAlchemy into a transmogrifier pipeline
RPMPackage transmogrify.siteanalyser-1.3-2.lbn13.noarch
Transmogrifier blueprints that look at how HTML items are linked, to gather metadata about items.

transmogrify.siteanalyser.defaultpage
  Determines that an item is the default page for a container if it has many links to items in that container.

transmogrify.siteanalyser.relinker
  Fixes links in HTML content. Previous blueprints can adjust the '_path' and set the original path as '_origin'; relinker will then fix all the img and href links. It also normalizes ids.

transmogrify.siteanalyser.attach
  Finds attachments which are only linked to from a single page. Attachments are merged into the linking item, either by setting keys or by moving it into a folder.

transmogrify.siteanalyser.title
  Determines the title of an item from the link text used.
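A hypothetical pipeline fragment wiring these blueprints together might look like this. Only the blueprint names come from the package; the section names are illustrative, and a real pipeline would start with a source section (such as a webcrawler) and end with a constructor:

```
[transmogrifier]
pipeline =
    defaultpage
    relinker
    attach
    title

[defaultpage]
blueprint = transmogrify.siteanalyser.defaultpage

[relinker]
blueprint = transmogrify.siteanalyser.relinker

[attach]
blueprint = transmogrify.siteanalyser.attach

[title]
blueprint = transmogrify.siteanalyser.title
```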
RPMPackage transmogrify.regexp-0.5.0-1.lbn13.noarch
transmogrify.regexp allows you to use regular expressions and format strings to search and replace key values in a transmogrifier pipeline.
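The concept - a regex search-and-replace over a key value in a pipeline item - amounts to something like the following. The '_path' key and the pattern are hypothetical, and this is not transmogrify.regexp's configuration syntax:

```python
import re

# One transmogrifier-style pipeline item (the '_path' key is illustrative).
item = {"_path": "/old-site/news/item.html"}

# Search-and-replace the key's value with a regular expression.
item["_path"] = re.sub(r"^/old-site/", "/", item["_path"])
print(item["_path"])  # -> /news/item.html
```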
RPMPackage transmogrify.print-0.5.0-1.lbn13.noarch
Transmogrifier blueprint to print pipeline item keys
RPMPackage transmogrify.ploneremote-1.3-2.lbn13.noarch
transmogrify.ploneremote is a package of transmogrifier blueprints for uploading content to a Plone site via the Zope XML-RPC API. The Plone site does not need any modifications; vanilla Zope XML-RPC is used.
RPMPackage transmogrify.pathsorter-1.0b4-2.lbn13.noarch
transmogrify.pathsorter is a blueprint for reordering items into tree sorted order
RPMPackage transmogrify.htmlcontentextractor-1.0-1.lbn13.noarch
Helpful transmogrifier blueprints to extract text or html out of html content.

transmogrify.htmlcontentextractor.auto
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This blueprint has a clustering algorithm that tries to automatically extract the content from the HTML template. This is slow and not always effective. Often you will need to input your own template extraction rules. In addition to extracting the Title, Description and Text of items, the blueprint will output the rules it generates to a logger with the same name as the blueprint. Setting debug mode on templateauto will give you details about the rules it uses. ::

  DEBUG:templateauto:'icft.html' discovered rules by clustering on 'http://...'
  Rules:
      text= html //div[@id = "dal_content"]//div[@class = "content"]//p
      title= text //div[@id = "dal_content"]//div[@class = "content"]//h3
  Text:
      TITLE: ...
      MAIN-10: ...
      MAIN-10: ...
      MAIN-10: ...

Options
-------

condition
  TAL expression to control use of this blueprint
debug
  default is ''

transmogrify.htmlcontentextractor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This blueprint extracts the title, description and body from html, either via XPath, TAL, or by automatic cluster analysis. Rules are of the form ::

  (title|description|text|anything) = (text|html|optional|tal) Expression

where Expression is either TAL or XPath. For example ::

  [template1]
  blueprint = transmogrify.htmlcontentextractor
  title = text //div[@class='body']//h1[1]
  _delete1 = optional //div[@class='body']//a[@class='headerlink']
  _delete2 = optional //div[contains(@class,'admonition-description')]
  description = text //div[contains(@class,'admonition-description')]//p[@class='last']
  text = html //div[@class='body']

Note that for a single template, e.g. template1, ALL of the XPaths need to match, otherwise that template will be skipped and the next template tried.
If you'd like a single XPath not to be necessary for the template to match, use the keyword optional or optionaltext instead of text or html before the XPath. When an XPath is applied within a single template, the HTML it matches is removed from the page; another rule in that same template can't match the same HTML fragment. If a content part is not useful (e.g. redundant text, title or description), this is a way to effectively remove that HTML from the content. To help debug your template rules you can set debug mode.

For more information about XPath see

- http://www.w3schools.com/xpath/default.asp
- http://blog.browsermob.com/2009/04/test-your-selenium-xpath-easily-with-firebug/

HTMLContentExtractor
====================

This blueprint extracts fields from html, either via XPath rules or by automatic cluster analysis.

transmogrify.htmlcontentextractor
---------------------------------

You can define a series of rules which will get applied to the '_text' of the input item. Each rule uses an XPath expression or a TAL expression to extract html or text out of the html, and adds the result as a key to the outputted item. Each option of the blueprint is a rule of the following form ::

  (N-)field = (optional)(text|html|delete|optional) xpath

OR ::

  (N-)field = (optional)tal tal-expression

"field" is the attribute that will be set with the results of the xpath. "format" is what to do with the results of the xpath. "optional" means the same as "delete" but won't cause the group to not match. If the format is delete or optional then the field name doesn't matter, but it will still need to be unique. "xpath" is an XPath expression. If the format is 'tal' then instead of an XPath you can use a TAL expression. The TAL expression is evaluated on the item object AFTER the XPath expressions have been applied.
For example ::

  [template]
  blueprint = transmogrify.htmlcontentextractor
  title = text //div[@class='body']//h1[1]
  _permalink = text //div[@class='body']//a[@class='headerlink']
  _text = html //div[@class='body']
  _label = optional //p[contains(@class,'admonition-title')]
  description = optional //div[contains(@class,'admonition-description')]/p[@class='last']/text()
  _remove_useless_links = optional //div[@id = 'indices-and-tables']
  mimetype = tal string:text/html
  text = tal python:item['_text'].replace('id="blah"','')

You can delete a number of parts of the html by extracting content to fields such as _permalink and _label. These items won't be used to set any properties on the final content, so they are effective as a means of deleting parts of the html. TAL expressions are evaluated after XPath expressions, so we can post-process the _text XPath to produce text stripped of a certain id.

N is the group number. Groups are run in order of group number. If any rule doesn't match (unless it's marked optional), the next group is tried instead. Group numbers are optional. Instead of groups you can also chain several blueprints together.

The blueprint will set '_template' on the item. If another blueprint finds the '_template' key in an item, it will ignore that item. The '_template' field is the remainder of the html once all the content selected by the XPath expressions has been removed.

transmogrify.htmlcontentextractor.auto
--------------------------------------

This blueprint will analyse the html and attempt to discover the rules to extract the title, description and body of the html. If the logger output is in DEBUG mode, the XPaths used by the auto extractor will be output to the logger.
RPMPackage transmogrify.filesystem-1.0b6-1.lbn13.noarch
Transmogrifier source for reading files from the filesystem This package provides a Transmogrifier data source for reading files, images and directories from the filesystem. The output format is geared towards constructing Plone File, Image or Folder content. It is also possible to add arbitrary metadata (such as titles and descriptions) to the content items, by providing these in a separate CSV file.
RPMPackage transmogrify.extract-0.4.0-1.lbn13.noarch
This Transmogrifier blueprint extracts text from within the specified CSS id.
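The idea of pulling the text out of one element identified by its id can be sketched with the standard library's html.parser. This is not the blueprint's implementation; IdTextExtractor is a hypothetical name:

```python
from html.parser import HTMLParser

class IdTextExtractor(HTMLParser):
    """Collect the text inside the element with a given id (illustrative only)."""

    def __init__(self, target_id):
        super().__init__()
        self.target_id = target_id
        self.depth = 0       # nesting depth inside the target element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1  # a nested tag inside the target element
        elif dict(attrs).get("id") == self.target_id:
            self.depth = 1   # entered the target element

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.chunks.append(data)

parser = IdTextExtractor("content")
parser.feed('<body><div id="nav">skip</div><div id="content"><p>Hello</p></div></body>')
print("".join(parser.chunks).strip())  # -> Hello
```

(Void elements such as `<br>` would need special handling in real code; the sketch only shows the extraction idea.)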
RPMPackage transmogrify.dexterity-1.0-1.lbn13.noarch
The transmogrify.dexterity package provides a transmogrifier pipeline section for updating field values of dexterity content objects. The blueprint name is transmogrify.dexterity.schemaupdater. The schemaupdater section needs at least the path to the object to update. Paths to objects are always interpreted as being relative to the context. Any writable field whose id matches a key in the current item will be updated with the corresponding value. Fields that do not get a value from the pipeline are initialized with their default value or get a missing_value marker. This functionality will be moved into a separate constructor pipeline... The schemaupdater section can also handle fields defined in behaviors.
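A minimal, hypothetical pipeline section using the blueprint might look like this. Only the blueprint name comes from the description above; the section name is illustrative:

```
[schemaupdater]
blueprint = transmogrify.dexterity.schemaupdater
```

Each pipeline item is then expected to carry the path of the object to update (relative to the context) plus keys matching the writable field ids to set.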
RPMPackage tl.eggdeps-0.4-2.lbn13.noarch
The eggdeps tool reports dependencies between eggs in the working set. Dependencies are considered recursively, creating a directed graph. This graph is printed to standard output either as plain text or as an input file to the graphviz tools.

Usage ::

  eggdeps [options] [specifications]

Specifications must follow the usual syntax for specifying distributions of Python packages, as defined by pkg_resources.

* If any specifications are given, the corresponding distributions will make up the roots of the dependency graph, and the graph will be restricted to their dependencies.
* If no specifications are given, the graph will map the possible dependencies between all eggs in the working set, and its roots will be those distributions that aren't dependencies of any other distributions.

Options ::

  -h, --help            show this help message and exit
  -i IGNORE, --ignore=IGNORE
                        project names to ignore
  -I RE_IGNORE, --re-ignore=RE_IGNORE
                        regular expression for project names to ignore
  -e DEAD_ENDS, --dead-end=DEAD_ENDS
                        names of projects whose dependencies to ignore
  -E RE_DEAD_ENDS, --re-dead-end=RE_DEAD_ENDS
                        regular expression for project names whose
                        dependencies to ignore
  -x, --no-extras       always omit extra dependencies
  -n, --version-numbers print version numbers of active distributions
  -1, --once            in plain text output, include each distribution
                        only once
  -t, --terse           in plain text output, omit any hints at unprinted
                        distributions, such as ellipses
  -d, --dot             produce a dot graph
  -c, --cluster         in a dot graph, cluster direct dependencies of
                        each root distribution
  -r, --requirements    produce a requirements list
  -s, --version-specs   in a requirements list, print loosest possible
                        version specifications

The -i, -I, -e, and -E options may occur multiple times. If both the -d and -r options are given, the one listed last wins. When printing requirements lists, -v wins over -s.
The script entry point recognizes default values for all options, the variable names being the long option names with any dashes replaced by underscores (except for --no-extras, which translates to setting extras=False). This allows for setting defaults using the arguments option of the egg recipe in a buildout configuration, for example.

Details
The goal of eggdeps is to compute a directed dependency graph with nodes that represent egg distributions from the working set, and edges which represent either mandatory or extra dependencies between the eggs.

Working set
The working set eggdeps operates on is defined by the egg distributions available to the running Python interpreter. For example, these may be the distributions activated by easy_install or installed in a zc.buildout environment. If the graph is to be calculated to such specifications that not all required distributions are in the working set, the missing ones will be marked in the output, and their dependencies cannot be determined. The same happens if any distribution that is either specified on the command line or required by any other distribution is available in the working set, but at a version incompatible with the specified requirement.

Graph building strategies
The dependency graph may be built following either of two strategies:

Analysing the whole working set:
  Nodes correspond exactly to the distributions in the working set. Edges corresponding to all conceivable dependencies between any active distributions are included, but only if the required distribution is active at the correct version. The roots of the graph correspond to those distributions no other active distributions depend upon.

Starting from one or more eggs:
  Nodes include all packages depended upon by the specified distributions and extras, as well as their deep dependencies. They may cover only part of the working set, and may include nodes for distributions that are not active at the required versions or not active at all (so their dependencies cannot be followed). The roots of the graph correspond to the specified distributions.

Some information is lost while building the graph:

* If a dependency occurs both mandatorily and by way of one or more extras, it will be recorded as a plain mandatory dependency.
* If a distribution A with installed extras is a dependency of multiple other distributions, they will all appear to depend on A with all its required extras, even if they individually require none or only a few of them.

Reducing the graph
In order to reduce an otherwise big and tangled dependency graph, certain nodes and edges may be omitted.

Ignored nodes:
  Nodes may be ignored completely, by exact name or regular expression matching. This is useful if a very basic distribution is a dependency of a lot of others. An example might be setuptools.

Dead ends:
  Distributions may be declared dead ends, by exact name or regular expression matching. Dead ends are included in the graph, but their own dependencies will be ignored. This allows large subsystems of distributions to be blotted out except for their "entry points". As an example, one might declare the zope.app.* packages dead ends in the context of zope.* packages.

No extras:
  Reporting and following extra dependencies may be switched off completely. This will probably make most sense when analysing the working set rather than the dependencies of specified distributions.

Output
There are two ways eggdeps can output the computed dependency graph: plain text (the default), and a dot file to be fed to the graphviz tools.

Plain text output
The graph is printed to standard output essentially one node per line, indented according to nesting depth, and annotated where appropriate.
The dependencies of each node are sorted according to the following criteria:

* Mandatory dependencies are printed before extra requirements.
* Dependencies of each set of extras are grouped, the groups being sorted alphabetically by the names of the extras.
* Dependencies which are either all mandatory or by way of the same set of extras are sorted alphabetically by name.

As an illustrating example, the following dependency graph was computed for two Zope packages, one of them required with a "test" extra depending on an uninstalled egg, and some graph reduction applied ::

  zope.annotation
      zope.app.container *
      zope.component
          zope.deferredimport
              zope.proxy
          zope.deprecation
          zope.event
      zope.dublincore
          zope.annotation ...
        [test]
          (zope.app.testing) *

Brackets []:
  If one or more dependencies of a node are due to extra requirements only, the names of those extras are printed in square brackets above their dependencies, half-indented relative to the node which requires them.

Ellipsis ...:
  If a node with further dependencies occurs at several places in the graph, the subgraph is printed only once, the other occurrences being marked by an ellipsis. The place where the subgraph is printed is chosen such that

  * extra dependencies occur as late as possible in the path, if at all,
  * shallow nesting is preferred,
  * paths early in the alphabet are preferred.

Parentheses ():
  If a distribution is not in the working set, its name is parenthesised.

Asterisk *:
  Dead ends are marked by an asterisk.

Dot file output
In a dot graph, nodes and edges are not annotated with text but colored. These are the color codes for nodes, later ones overriding earlier ones where more than one color is appropriate:

Green:
  Nodes corresponding to the roots of the graph.
Yellow:
  Direct dependencies of any root nodes, whether mandatory or through extras.
Lightgrey:
  Dead ends.
Red:
  Nodes for eggs installed at a version incompatible with some requirement, or not installed at all.
Edge colors:

Black:
  Mandatory dependencies.
Lightgrey:
  Extra dependencies.

Other than being highlighted by color, root nodes and their direct dependencies may be clustered. eggdeps tries to put each root node in its own cluster. However, if two or more root nodes share any direct dependencies, they will share a cluster as well.

Requirements list
All the distributions included in the graph may be output as the Python representation of a list of requirement specifications, either

* listing bare package names,
* including the exact versions as they occur in the working set, or
* specifying complex version requirements that take into account all version requirements made for the distribution in question (but disregard extras completely for the time being).

Complex version requirements always require at least the version that occurs in the working set, on the assumption that we cannot know the version requirements of past versions but may reasonably assume that requirements stay the same for future versions. The list is sorted alphabetically by distribution name.
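The core idea - mapping each distribution in the working set to its requirements and finding the roots - can be sketched with the standard library. This is a rough approximation, not eggdeps itself: it ignores versions, extras and environment markers entirely:

```python
import importlib.metadata
import re

# Map each installed distribution to the names of the distributions it
# requires (version pins, extras and environment markers are discarded).
graph = {}
for dist in importlib.metadata.distributions():
    name = dist.metadata["Name"]
    deps = set()
    for req in dist.requires or []:
        # A requirement string starts with the project name.
        m = re.match(r"[A-Za-z0-9_.\-]+", req)
        if m:
            deps.add(m.group(0))
    graph[name] = deps

# Roots of the graph: distributions no other distribution depends on.
required = set().union(set(), *graph.values())
roots = sorted(n for n in graph if n not in required)
```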