MediaWiki API result


{
    "batchcomplete": "",
    "continue": {
        "gapcontinue": "Shape:_Pointiness",
        "continue": "gapcontinue||"
    },
    "warnings": {
        "main": {
            "*": "Subscribe to the mediawiki-api-announce mailing list at <https://lists.wikimedia.org/postorius/lists/mediawiki-api-announce.lists.wikimedia.org/> for notice of API deprecations and breaking changes."
        },
        "revisions": {
            "*": "Because \"rvslots\" was not specified, a legacy format has been used for the output. This format is deprecated, and in the future the new format will always be used."
        }
    },
    "query": {
        "pages": {
            "168": {
                "pageid": 168,
                "ns": 0,
                "title": "Render Configuration",
                "revisions": [
                    {
                        "contentformat": "text/x-wiki",
                        "contentmodel": "wikitext",
                        "*": "= Engines =\n\nThe choice of engine decides how lights are traced in the scene.<br>\nLuxCoreRender offers the following engines (integrators):\n\n* '''Path''' (CPU/OpenCL): Unidirectional pathtracer that casts rays from the camera. Samples the whole film progressively. Supports all AOVs.\n* '''Tiled Path''' (CPU/OpenCL): Unidirectional pathtracer that is almost the same as '''Path''', but uses a special sampler which iterates over the image in tiles (this leads to a slightly lower RAM usage). The OpenCL version adapts the number of tiles that are rendered at once to the performance of the compute devices (GPUs/CPUs), so the tile size does not matter a lot (if rendering performance is bad due to small tiles, the number of rendered tiles is increased automatically). When the last tile of a pass is reached, it is split among the compute devices. Supports all AOVs.\n* '''Bidir''' (CPU only): Bidirectional pathtracer that casts rays from both camera and light sources. Samples the whole film progressively. Supports only a subset of AOVs: RGB, RGBA, ALPHA, DEPTH and SAMPLECOUNT. It is recommended to combine this engine with the '''Metropolis''' sampler.\n\n= Samplers = \n\nThe sampler decides how points on the film are chosen (in which area to fire more/less light rays).<br>\nLuxCoreRender offers the following samplers:\n\n* '''Sobol:''' Random sampler with an improved noise pattern. Supports adaptive sampling to spend more samples on noisy areas of the image.\n* '''Metropolis: ''' Sampler that spends more samples on bright areas of the image, thus rendering caustics much better than the other samplers. The main disadvantage, especially when rendering on the GPU, is the higher RAM usage than the other samplers. It is not recommended to use this sampler for rendering on GPUs.\n* '''Random:''' Simple random sampler. Supports adaptive sampling to spend more samples on noisy areas of the image. 
In almost all cases, the Sobol sampler is better suited.\n\n= Clamping =\n\nAll engines support \"variance clamping\" of samples. The clamping value has to be chosen to fit the brightness in the scene. \n\n= Caches =\n\nCaches are pre-computed before the rendering of film samples starts. They can accelerate the rendering of their respective area of expertise tremendously. Caches can be saved to disk and re-used, which can for example be used for animations where only the camera moves.\n\n* '''[[ PhotonGI ]] Indirect Light Cache:''' Use this cache if the indirect light in the scene is noisy (for example in indoor scenes)\n\n* '''[[ PhotonGI ]] Caustics Cache:''' Use this cache if there are SDS-caustics in the scene (e.g. caustics in mirror, pool with camera above surface, underwater scene with caustics reflected from water surface, etc.)\n\n* '''Environment light cache:''' Use this cache if direct light from world background is noisy (for example in indoor scenes lit by sky or HDRI through small windows). Do not use it in open scenes, as it can be detrimental to performance in that case. Note that this cache may produce visible artifacts if aggressive (too low) clamping values are used. Raise the clamping value if this happens.\n\n* '''Direct Light Sampling Cache:''' Use this cache in scenes with many light sources, where most of the lights only affect a limited area around them (e.g. a skyscraper with hundreds of rooms, each room lit by one light source). This cache makes direct light sampling in such situations much more efficient. Note that this cache may produce visible artifacts if aggressive (too low) clamping values are used. 
Raise the clamping value if this happens.\n\n= Light Strategy =\n\nThe light strategy controls how much processing power is spent on each light in the scene.<br>\nAll lights in the scene can be sampled with the same probability (''uniform'' light strategy), but if some light sources are much brighter than others, the image will be more noisy because a lot of processing power is spent on weak lights that are not contributing much to the lighting.<br>\nThe solution to this problem are the ''power'' and ''log power'' light strategies. They dedicate more processing power to lights that are brighter.<br>\n\nThe following light strategies are available:\n\n* Uniform\n* Power\n* Log Power (default)\n\nThe sampling probability of a light can also be influenced with the importance setting of the light source:\n\n'''Importance:''' How much processing power to spend on this light source compared to other light sources. \nUsed to scale the light importance computed by the light strategy. For instance, if you set a ''uniform'' light strategy, a light with a user importance of 2.0 will be sampled 2 times more often than one with 1.0.\nIf you use a ''power'' light strategy, the user importance will be multiplied by the light power."
                    }
                ]
            },
            "90": {
                "pageid": 90,
                "ns": 0,
                "title": "Shadowcatcher",
                "revisions": [
                    {
                        "contentformat": "text/x-wiki",
                        "contentmodel": "wikitext",
                        "*": "=Overview=\n\nSince version 1.6 LuxCore materials have the ''shadowcatcher'' option. See [http://www.luxrender.net/forum/viewtopic.php?f=8&t=12687 this forum thread] for the original announcement.<br>\nIt is intended to be used for compositing, e.g. to integrate a rendered 3D object into a photograph.\n\n'''A shadowcatcher material will be transparent where hit by direct light, while shadowed areas will stay opaque.'''<br>\nThe opacity of the shadow can be controlled by the opacity/transparency of the material.<br>\nThe color of the shadow can be influenced by the material color.\n\nAny material can be a shadowcatcher, but it does not make sense to use a purely specular material without any diffuse reflection, like glass or mirror.<br>\nUsually matte is the material best suited for a shadowcatcher.\n\n=Supported Engines=\nThe shadowcatcher option is supported by the ''Path'' and ''Tiled Path'' engines (both CPU and OpenCL), but not by the ''Bidir'' and ''BidirVCM'' engines.\n\nIt is possible, however, to use the alpha channel generated by a rendering using the path engine to compose the images.\n\n<!--\n=Example=\nThis is a scene with a LuxBall as 3D object, lit by an infinite light (\"hemi\" in Blender).<br>\nNote how the lower half of the environment image is black in order to not cast any shadows onto the shadow catcher later.\n\n[[File:Shadowcatcher_redball_raw.jpg|500px]]\n\nComposited over a background image:\n\n[[File:Shadowcatcher_redball_without.jpg|500px]]\n\nAnd now with a groundplane using a shadowcatcher material:\n\n[[File:Shadowcatcher_redball_with.jpg|500px]]\n-->\n\n=Tips=\n* When using a sky light as environment, enable the \"groundcolor\" option and set the groundcolor to black\n* When using an imagemap as environment (e.g. a HDRI), enable the \"only sample upper hemisphere\" option\nDoing this prevents the environment to cast lights onto the shadowcatcher from below, leading to better alpha transparency."
                    }
                ]
            }
        }
    }
}
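
The "Render Configuration" page above maps onto LuxCoreRender's property-based render config. A sketch of a config fragment selecting the choices the page recommends as defaults (Path engine, Sobol sampler, Log Power light strategy, variance clamping); the property names follow the LuxCore properties convention, but verify the exact names and values against your LuxCore version:

```
# Engine: unidirectional pathtracer on the CPU
renderengine.type = PATHCPU
# Sampler: Sobol (adaptive sampling, good default noise pattern)
sampler.type = SOBOL
# Light strategy: Log Power is the default per the page above
lightstrategy.type = LOG_POWER
# Variance clamping; the value must be matched to scene brightness
path.clamping.variance.maxvalue = 10
```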
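
The "warnings" block above notes that the query used the deprecated legacy revision format because "rvslots" was not specified. A minimal sketch of re-issuing the same kind of query with rvslots=main, which avoids that warning; the wiki base URL is an assumption (adjust to the actual wiki), and the sample response below is illustrative, not taken from a live server:

```python
from urllib.parse import urlencode

# Build the query URL. With rvslots=main the API returns the new-style
# output instead of the deprecated legacy format warned about above.
params = {
    "action": "query",
    "generator": "allpages",   # same generator implied by "gapcontinue"
    "prop": "revisions",
    "rvprop": "content",
    "rvslots": "main",         # opt in to the non-legacy revision format
    "format": "json",
}
url = "https://wiki.luxcorerender.org/api.php?" + urlencode(params)

# With rvslots, the wikitext moves from revisions[0]["*"] to
# revisions[0]["slots"]["main"]["*"] (formatversion=1). Illustrative shape:
sample_page = {
    "revisions": [
        {"slots": {"main": {"contentmodel": "wikitext", "*": "= Engines ="}}}
    ]
}
wikitext = sample_page["revisions"][0]["slots"]["main"]["*"]
print(url)
print(wikitext)
```

Note that continuation still works the same way: pass the returned "continue" parameters (here gapcontinue) back into the next request to fetch the remaining pages.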