Planta Med 2014; 80(14): 1182-1199
DOI: 10.1055/s-0034-1383061
Reviews
Georg Thieme Verlag KG Stuttgart · New York

How to Translate a Bioassay Into a Screening Assay for Natural Products: General Considerations and Implementation of Antimicrobial Screens

Adyary Fallarero, Leena Hanski, Pia Vuorela
Division of Pharmaceutical Biosciences, Centre for Drug Research, Faculty of Pharmacy, University of Helsinki, Helsinki, Finland

Correspondence

Adyary Fallarero
Division of Pharmaceutical Biosciences, Centre for Drug Research, Faculty of Pharmacy, University of Helsinki
Viikinkaari 5E
Viikki Biocenter, 00014
Finland
Phone: +358 44 283 4933

Publication History

received 15 July 2014
revised 27 August 2014

accepted 27 August 2014

Publication date:
15 September 2014 (online)

 

Abstract

Natural product sources have been a valuable provider of molecular diversity in many drug discovery programs, and several therapeutically important drugs have been isolated from them. However, screening such materials can be very complicated: crude samples contain complex mixtures of secondary metabolites, and even purified natural compounds pose challenges for bioactivity screening. Success in identifying new therapeutics using in vitro bioassays depends largely on the proper design, validation, and implementation of the screening assay. In this review, we discuss aspects of significant concern when screening natural products in a microtiter plate-based format (many of which apply to other assay formats as well), such as validation parameters, layouts for assay protocols, and common interferences caused by natural product samples, together with various troubleshooting strategies. Examples from the field of natural product drug discovery of antibacterial compounds are discussed, and contributions from the realm of academic screening are highlighted.


Abbreviations

ARS: aminoacyl-tRNA synthetase
CV: coefficient of variation
DLS: dynamic light scattering
ECIS: electric cell-substrate impedance sensing
EthD-1: ethidium homodimer-1
Fab: fatty acid biosynthesis
FP: fluorescence polarization
FRET: fluorescence resonance energy transfer
GFP: green fluorescent protein
HCS: high-content screening
HTS: high-throughput screening
LUO: laboratory unit operation
MTS: medium-throughput screening
MTT: 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide
NP: natural products
PC: photonic crystal
S/B: signal-to-background ratio
S/N: signal-to-noise ratio
SPR: surface plasmon resonance
SW: signal window
T3SS: type 3 secretion system
TEM: transmission electron microscopy
uHTS: ultrahigh-throughput screening
Z′: screening window coefficient

Introduction

Biomolecular screening involves the exploration of libraries of pure compounds or extracts for their effects on relevant targets, either actual biomolecules or biological events, leading to the identification of actives (“hits”) that can eventually be developed further for preclinical testing (“leads”) or used as chemical tools. Based on the throughput, different terms have been coined (see below), but these definitions are relative and their use varies among authors. HTS generally implies that between 10 000 and 100 000 compounds (or samples) per day are screened, while MTS refers to campaigns involving around 1000 to 10 000 compounds per day [1]. Screening smaller numbers of samples is then regarded as “low-throughput screening”, while at the other extreme, uHTS implies that more than 100 000 data points are generated per day [1], [2]. Regardless of the throughput, three basic and interconnected elements have been reasonably argued as critical for the success of screening campaigns: 1) targets, 2) screening methods, and 3) chemical libraries.

The selection of a single target has typically been driven by disease relevance and chemical tractability. While the one-target approach has allowed for the exploration of very large compound collections in a fast and cost-effective manner, it has also been reductionist, as it ignores the biochemical networks inherently present in biological systems. Because of that, recent views have shifted the target selection process towards finding a set of multiple targets associated with the desired clinical effects [3]. That relates to the second aspect – the selection of screening methods. The main choice has been between biochemical and cell-based assays. With the resurgence of polypharmacology and multiple-target approaches, cell-based methods, in particular phenotypic and pathway assays, have become more prevalent in screening. In the first screens, also known as “primary” screens, using either biochemical or cell-based assays, compounds are tested only once and the follow-up strategies are decided based upon these results. This implies that the assay has to be shown to perform robustly. In general, any level of automation of in vitro assays increases throughput, decreases human error and the need for labor, and improves safety. The third critical element is the selection of compounds to be screened. Very large synthetic chemical collections of millions of compounds have now been made available, but the positive impact of using natural compound collections is undeniable, especially in certain therapeutic areas.

Natural compounds have provided milestone drugs, e.g., taxol, statins, and cyclosporine, with very successful roles in modern pharmacotherapy. For instance, antibacterial drugs of natural origin (natural compounds or their semisynthetic modifications) account for more than 65 % of all antibacterial drugs approved over the last 30 years [4]. Another example is provided by anticancer drugs, of which a staggering 50 % of all existing medications are natural products or molecules derived therefrom [5]. Besides their roles as lead compounds, natural products provide starting scaffolds for structural optimization via combinatorial chemistry or serve as chemical tools for the validation of new molecular targets [6]. They are also a more successful choice when screening against protein-protein interaction targets, because they tend to be larger and, in many cases, more complex than the members of synthetic libraries [7].

When aimed specifically at the exploration of natural products, the development of screening assays encounters specific challenges. The screening of samples of natural origin can involve crude extracts (10–1000 or more compounds), semi-purified mixtures (5–10 compounds), or structure-elucidated purified single compounds. The structural complexity of the extracts has historically been the most daunting factor influencing work with natural products. From the assay perspective, one challenge is that screening methods are run under experimental conditions that jeopardize the proper identification of actives within the complex natural mixtures, thus producing more false negatives, as some active components can be present at very low concentrations [8]. Another problem is that several natural compounds are known to aggregate in biochemical buffers, causing nonspecific inhibition of unrelated enzymes. For instance, the unexpected behavior of natural retinoids in a filter-binding assay with bovine and reindeer β-lactoglobulin was shown to be explained by aggregate formation [9]. Natural compounds can also interfere with optical detection methods, as they can display intrinsic fluorescence or cause light scattering, among other effects.

Therefore, in this review, several such methodological challenges will be discussed and concrete approaches will be recommended to circumvent them. To exemplify how these methodological challenges can be encountered in an area relevant to natural product researchers, we selected the discovery of antibacterial drugs. This review does not claim to be comprehensive, but it does aim to serve as a first-hand tool for natural product researchers to recognize problems during screening campaigns and to adopt specific resolution strategies.


Development and Validation of Screening Assays

The general workflow

The development and implementation of appropriate assays are indispensable steps prior to the performance of chemical screens. These steps are performed within the early drug discovery phase commonly referred to as “Target-to-Lead”. The general workflow of the “Target-to-Lead” phase is schematically represented in [Fig. 1], while specific issues that need to be considered during the discovery of drugs of natural origin are summarized in [Table 1]. Most of the issues discussed in this review are applicable across all variants of screens (from high to low throughput). However, emphasis has intentionally been placed on considerations applicable to MTS, as this corresponds to the most frequent type of screen currently performed in academic environments worldwide. Indeed, with the growing engagement of academic centers in the performance of MTS (and, in many cases, also of HTS), the term “drug screening” has become less used and the more general “chemical screening” has been adopted instead, as the latter is not biased towards a therapeutic goal and better reflects the wider interests of the academic community [10].

Fig. 1 Workflow describing the general steps of the HTS process (red boxes) within the overall process of discovering new drug candidates. (Color figure available online only.)

Table 1 Key aspects of the HTS assay optimization, validation, and implementation process specific to the early drug discovery of natural products.

HTS assay development

  Assay configuration
    • Some technologies can be more likely to fail when applied to NP.
    • Some plate materials can cause increased adsorption of NP mixtures.

  Assay conditions
    • Low concentrations of NP in complex mixtures can challenge the detection limits of the assays.
    • Compatibility of the specific solvents used for natural products needs to be tested.

  Automation (discussed in Section 3.1)
    • Pipetting protocols and automated handling of natural products need to be finely tuned.

  Miniaturization (discussed in Section 3.1)
    • Available amounts of NP or extracts can be very limited, driving the need for miniaturization.

HTS statistical quality (discussed in Section 2.2)
    • Control of statistical quality is imperative in HTS but lacking in many NP screening studies.

HTS assay implementation

  Identification and elimination of interferences (discussed in Section 3.2)
    • Common interferences due to colloidal aggregation of natural compounds need to be addressed.
    • Methods need to be chosen to exclude optically interfering natural compounds.
    • Chemically reactive NPs can cause false positives or negatives.

Lead refinement

  Selection in hit-to-lead process
    • Structural complexity can hamper chemical synthesis or the structural optimization process.

The “Target-to-Lead” phase ([Fig. 1]) starts with the selection of a target, which is greatly influenced by the relevance to the disease in question and the chemical tractability. At this stage, striking a balance between the novelty and the validity of the chosen target is an important goal, since this will ultimately determine the type of drug that may be discovered – a new type of drug or a drug similar to an existing one. Target novelty and validity generally have an inverse relationship. Validated targets with known disease relevance generally lack novelty, while novel targets typically require further validation studies (e.g., cell-based and animal studies) to ensure that screens provide meaningful results and lead to the discovery of clinically valuable compounds [7]. Drug discovery strategies built on either type of target have pros and cons, and concrete examples will be given in section 4 of this review when discussing the discovery of natural antibacterial compounds.

Upon selecting a target, an assay method needs to be chosen so that the activity of the test compounds against the target is measured either in the absence or presence of cells. The development process ideally aims at providing an assay that combines simplicity with good sensitivity, reproducibility, and accuracy, as well as amenability to automation and a reasonable price [11], [12]. In terms of simplicity, the goal is to develop assays that do not have many steps (not more than 5–10) and only necessitate simple operations, with a preference towards assays that only involve additions, otherwise known as homogeneous assays. The sensitivity of the assay must be sufficient to permit the identification of low potency or low efficacy compounds. Accuracy refers to the ability of the assay to provide trustworthy results, as assessed via control compounds with known pharmacological effects on the measured targets, while reproducibility relates to the stability of the response offered by the assay, ensuring that similarly reliable results are obtained when the assay is performed in different wells, plates, and days, by different human operators, or using different types of laboratory equipment [13]. The reproducibility of the assay is also tightly connected to the assay reagents, both chemical and biological. These need to be stable, and no significant changes in assay components should be detected, especially in the case of non-commercial reagents that may be prepared in-house in different batches, such as recombinant proteins or cell suspensions.

Assuming that the target is chemically tractable, it is fairly safe to say that most assays can be engineered and conditions optimized so that compounds modulating the target are found in the tested collections. Examples of the conditions that need to be optimized for both biochemical and cell-based assays are presented in [Fig. 2]. To follow the optimization process and test the effects that these different conditions have on the quality of the assay, statistical analyses are performed, which will be discussed in the next section.

Fig. 2 Conditions that typically need to be optimized during the HTS assay development process. (Color figure available online only.)

Academic screening centers and groups have been particularly successful in the development and adoption of cell-based phenotypic screening (recently reviewed in [10]) as a strategy to cover diseases that have been neglected by pharmaceutical companies and to access a wider spectrum of targets [14]. The development of high-quality cell-based assays, combined with the selection of well-designed academic libraries (not necessarily larger ones), has translated into the discovery of highly active molecules that maintain very high activity in vivo as well (e.g., [15]).

Academic institutions have also contributed innovative studies incorporating HCS, not only during the primary screening phase but also for the determination of the modes of action of actives (e.g., [16]). The complex data processing, which was the major drawback of HCS, is now being overcome by the scientific community through the Open Microscopy Environment project, which has provided entirely open solutions such as OMERO [17]. OMERO allows for the open analysis of complex multidimensional image data as well as the storage and handling of scientific image repositories. Such tools have allowed HCS data to be exchanged between multiple platforms, free of dependency on closed and often expensive commercial solutions [10]. Furthermore, another interesting feature of academic screening has been the implementation of whole organism screens employing nematodes or zebrafish, which in some cases have been applied in conjunction with HCS [18]. More examples of this type of screen will be discussed in section 4.


Statistical analysis

Assay performance and assay sensitivity have been suggested by authorities in the field as the two crucial aspects that should be monitored during the development of screening assays [11], [19]. In general, an assay is considered to perform well if it provides an adequate distinction between maximum and minimum signals, which indicates that the assay can effectively differentiate between the inactives that largely populate chemical collections and the much less frequent actives. Good reproducibility also needs to be demonstrated. For this purpose, certain statistical parameters characterizing the assay performance are calculated. In large pharmaceutical organizations, they are an essential part of the routine operations performed in HTS laboratories, and a considerable amount of literature has been dedicated to them. However, in academic groups, especially those dedicated to the MTS of natural products, they are less known and their application is consequently much less reported. [Table 2] presents a summary of the relevant parameters needed to assess the performance of screening assays.

Table 2 Statistical parameters needed to characterize the performance of HTS assays.

Parameter: Screening window coefficient (Z′)
  Equation*: Z′ = 1 − 3(SDmax + SDmin)/|µmax − µmin|
  Target value: 0.5–1.0
  Comments: Takes into account the means as well as the dispersion of the assay signals. Z′ is measured only with controls; Z is measured in the presence of library compounds.
  Reference: [20]

Parameter: Signal-to-noise ratio (S/N)
  Equation*: S/N = (µmax − µmin)/√(SDmax² + SDmin²)
  Target value: the higher, the better
  Comments: Calculated using only control compounds; measures how robustly maximum and minimum signals can be differentiated.
  Reference: [21]

Parameter: Signal-to-background ratio (S/B)
  Equation*: S/B = µmax/µmin
  Target value: > 2
  Comments: Calculated using only control compounds; ratio of the maximum and minimum signal means. If the CVs of the signals are under 10 %, the S/B can be lower.
  Reference: [20]

Parameter: Coefficient of variation of the signals (CV)
  Equation*: CV = 100 · SD/µ
  Target value: < 15 %
  Comments: Indicates the variation of the signals; can be calculated to compare signals between plates and days.

Parameter: Coefficient of variation of the assay
  Equation*: —
  Target value: < 20 %
  Comments: The calculation can be applied when SDmin is lower than or equal to SDmax.
  Reference: [22]

Parameter: Signal window (SW)
  Equation*: SW = (µmax − µmin − 3(SDmax + SDmin))/SDmax
  Target value: > 2
  Comments: Indicates the separation between the maximum and minimum signals, used to characterize the dynamic signal range of the assay.
  Reference: [20]

* In all equations, µmax, µmin and SDmax, SDmin refer to the means and standard deviations, respectively, of the maximum (max) and minimum (min) signals of the screening assay.

Since first being published in 1999 by Zhang et al. [20], Z′ has become one of the most useful tools (if not the most useful) to estimate assay performance. Based upon the assumption that the signals are normally distributed (99.7 % of the data points fall within ± 3 standard deviations of the mean), Z′ estimates whether a statistically significant separation between the extreme assay responses (maximum and minimum) exists in the assay. When the Z′ value is over 0.5 (or 0.4 in the case of cell-based assays), the assay can be considered a well-performing screening method. Conversely, Z′ is close to or equals 0 when the extreme signals of the assay overlap, which is an indication of a poor-quality assay. As seen in [Table 2], the calculation is very simple, and it is independent of the assay format or the technology being used. In fact, Z′ is a dimensionless parameter that can be used to compare runs performed under different conditions. For calculating Z′, we first recommend performing normality tests on the control samples with any reliable statistical software package to ensure that the prerequisite of a normal distribution is fulfilled. A second step would then focus on preparing and running assay control plates under different assay conditions ([Fig. 2]), that is, plates containing only control samples (maximum and minimum signals), to establish the assay performance and variability across different wells, plates, and days. At this stage, reagent stability should also be studied to exclude other possible sources of error during the assay implementation.
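As a minimal illustration of this recommendation, the sketch below (Python, assuming NumPy and SciPy are available; the control readouts and variable names are hypothetical) first applies a Shapiro-Wilk normality test to the control wells and then computes Z′ as defined in [Table 2].

```python
import numpy as np
from scipy import stats

def z_prime(max_signals, min_signals):
    """Screening window coefficient (Zhang et al., 1999):
    Z' = 1 - 3*(SD_max + SD_min) / |mean_max - mean_min|."""
    max_signals = np.asarray(max_signals, dtype=float)
    min_signals = np.asarray(min_signals, dtype=float)
    sd_max, sd_min = max_signals.std(ddof=1), min_signals.std(ddof=1)
    mu_max, mu_min = max_signals.mean(), min_signals.mean()
    return 1.0 - 3.0 * (sd_max + sd_min) / abs(mu_max - mu_min)

# Hypothetical control-plate readouts (arbitrary fluorescence units)
max_ctrl = np.array([980, 1010, 995, 1005, 990, 1002, 988, 1015])
min_ctrl = np.array([102, 98, 95, 105, 100, 97, 103, 99])

# Check the normality prerequisite before trusting Z'
for name, values in (("max", max_ctrl), ("min", min_ctrl)):
    w, p = stats.shapiro(values)
    print(f"Shapiro-Wilk ({name} controls): W={w:.3f}, p={p:.3f}")

print(f"Z' = {z_prime(max_ctrl, min_ctrl):.2f}")  # > 0.5 indicates a good assay
```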

The same data generated during this initial optimization run permit the calculation of the S/B and S/N ratios. S/B as a stand-alone parameter has limited value, since it does not take into account the variability of the signals. On the other hand, the S/N ratio, when calculated based on the modified equation published by Bollini et al. [21], does provide a good assessment of both the signal window and the signal variability. In any case, both parameters are nowadays typically used in conjunction with Z′.

The Z′ equation can also be rewritten to make the relationship with the S/B ratio and the signal variations expressed as CVs more evident, as correctly indicated by [7]:

Z′ = 1 − 0.03 · (S/B · CVmax + CVmin)/(S/B − 1), where CVmax and CVmin are expressed in percent.

According to this equation, when the variations of the signals measured by the CV % are low, the S/B ratio can also be low and the quality of the assay is still kept high. Thus, although the S/B ratio is in principle recommended to be higher than 2 ([Table 2]), an assay with low variability (for instance, a CV of 5 % for both signals) will yield a Z′ of 0.55 even if the S/B is only 2. The measurement of these parameters is crucial to identify the best experimental conditions, and once the assay is developed, they also allow any unexpected change in performance to be tracked during the assay implementation stage. Thus, during screening campaigns, Z′, along with the other statistical parameters, should be calculated for every screened plate. It has been demonstrated that although Z′ and SW measure the same properties of the assay, that is, signal separation and variability, Z′ is still the better choice [22]. In our opinion, a more systematic utilization of these parameters in natural product screens would be a highly beneficial practice for the field.
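The worked example above can be checked numerically. The short sketch below (same Python conventions as before; the values are those of the example, not real screening data) evaluates the rewritten Z′ expression.

```python
def z_prime_from_sb_cv(sb, cv_max, cv_min):
    """Z' rewritten in terms of S/B and the CVs (in percent):
    Z' = 1 - 0.03 * (S/B * CV_max + CV_min) / (S/B - 1)."""
    return 1.0 - 0.03 * (sb * cv_max + cv_min) / (sb - 1.0)

# With a 5 % CV on both signals, an S/B of only 2 still gives Z' = 0.55
print(z_prime_from_sb_cv(sb=2.0, cv_max=5.0, cv_min=5.0))  # 0.55
```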



Examples of Specific Problems Encountered During the Development and Implementation of Screening Assays of Natural Products

Optimization of conditions related to the assay protocol

When optimizing and automating an assay, it is essential to define key aspects of the different assay protocols, i.e., the steps and processes that are to be performed. This helps in identifying potential bottlenecks and other limiting steps prior to initiating the automation trials [23], [24].

Fully automated systems are capable of performing full assay procedures unattended, from the test compound management through the sample preparation and sample analysis to the data processing. Usually, they utilize a centralized robotic arm, which integrates compound libraries, several liquid handling stations, analytical devices, plate incubators, and stackers to the system. These systems are best suited for high compound numbers assayed with fairly simple protocols. On the other hand, semiautomated systems comprising a single liquid handling workstation that may be used in combination with a liquid dispensing unit are more flexible for manual interruptions and reprogramming, and therefore are usually used in executing sections of more challenging assay protocols. An example of a semiautomated protocol in comparison with a manual protocol is presented in [Fig. 3] and further detailed in [Fig. 4]. These semiautomated approaches are generally more accessible to academic researchers as they may not require massive infrastructure investments and are therefore less costly to implement.

Fig. 3 General view of a typical protocol flow, exemplified with a manual vs. semiautomated comparison of an antimicrobial assay.
Fig. 4 Detailed view of the protocol flow (presented earlier in [Fig. 3]) of a manual (A) vs. semiautomated (B) antimicrobial assay, highlighting the concepts of the steps and processes. The processes are only indicated in the semiautomated assay, within thicker boxes, while steps are shown within thinner boxes. (Color figure available online only.)

Designing assay protocols (in semi- or fully automated systems) according to LUO thinking helps to provide an understanding of the relationship between engineering theory and the performance of actual experimental laboratory operations [25]. A specific sequence of unit operations is called a “process” ([Fig. 4]) and may include one or several individual operations referred to as steps (e.g., the addition of cell suspension, removal of culture medium, incubation, washing, or staining steps). This practice improves the planning, quality, and integrity of assay protocols, and assures valid interpretation of the results from data analysis.
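To illustrate LUO-style decomposition in practice, the sketch below represents a protocol as processes composed of steps, each carrying its own liquid handling parameters. The data structure and the step names (loosely modeled on the semiautomated assay of [Fig. 4]) are hypothetical constructs for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """A single laboratory unit operation (e.g., one addition or wash)."""
    name: str
    params: dict = field(default_factory=dict)  # e.g., dispensing speed, tip height

@dataclass
class Process:
    """A named sequence of steps, per LUO thinking."""
    name: str
    steps: list

# Hypothetical decomposition of a semiautomated antimicrobial assay
protocol = [
    Process("Plate preparation", [
        Step("Add cell suspension", {"shake_before": True, "dispense_speed": "low"}),
        Step("Incubate", {"temp_c": 37, "hours": 24}),
    ]),
    Process("Exposure and staining", [
        Step("Remove culture medium", {"aspirate_speed": "low", "tip_height_mm": 1.0}),
        Step("Add test compounds", {"tip_refresh": True}),
        Step("Add stain", {"tip_height_mm": 2.0, "tip_refresh": True}),
    ]),
]

for process in protocol:
    print(process.name)
    for step in process.steps:
        print(f"  - {step.name}: {step.params}")
```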

Incubation steps do not typically change between manual and automated assays; generally, the only precaution is to ensure that there is enough space in the incubators to accommodate the larger number of plates that may be generated in an automated assay. In contrast, liquid handling is probably the most demanding aspect that needs to be addressed, as it strongly affects the quality of the results obtained; the liquid handling parameters therefore have to be adjusted to achieve acceptable precision without interfering with the assay system. Some examples are presented in [Table 3].

Table 3 Some key liquid handling parameters to be taken into account when performing an assay and troubleshooting strategies, exemplified with the semiautomated assay detailed in [Fig. 4 B] (the numbering of processes and steps refers also to the LUO shown in [Fig. 4 B]).

Process 1, Step 1
  Potential problem: Suspension uniformity (SH)
    • Prior shaking (and systematic shaking during the protocol, if needed).
  Potential problem: Contamination (TR)
    • Tip refreshing steps need to be added.
  Potential problem: Mechanical stress on cells (DS)
    • Fine-tuning of the tolerable dispensing speeds.

Process 2, Step 4
  Potential problem: Cellular detachment during media removal (AS, TH)
    • Aspiration without touching the bottom of the wells (for attached cells).
    • Aspiration speed and tip height need to be optimized.

Process 2, Step 5
  Potential problem: Cross-contamination within the plate (TR)
    • Tip refreshing steps need to be added.

Process 2, Step 6
  Potential problem: Stains outside the reaction wells (TH)
    • Dispensing height has to be carefully optimized.
    • Tip refreshing steps may be added.
    • Potentially overstained wells are tracked.

* Critical parameter key: SH – shaking, TR – tip refreshing, DS – dispensing speed, AS – aspiration speed, TH – tip height

Critical steps concern the general tolerance of the targets to mechanical stress caused by pipetting, the dispensing speed of all components (e.g., too high a dispensing speed could cause fluid forces to interfere with the fixation of cells or proteins), removal (i.e., aspiration) of the supernatant after fixation, and contamination hazards. Contamination should be carefully detected and minimized by choosing the most appropriate tips and liquid handling settings for each reagent [26]. Mixing efficiency, on the other hand, depends on the correct combination of liquid volume and the method used. When working with cells, it must be remembered that stirring too vigorously may cause disruption and impair functionality. Evaporation of liquids may become a problem during prolonged incubations (if the plates cannot be sealed); cellular studies are normally run at + 37 °C, which promotes evaporation. The assay volume determines the plate format and the choice of liquid handling device [27].

By automating assays, the number of variables that can be controlled increases significantly, to a level of detail that is not achievable even by the most skilled researchers. Specific variables, such as the distance from the bottom of the well at which the pipette tip is positioned in every dispensing step, the dispensing or aspiration speed, or the blowout technique, would be impossible to control manually [27]. Fine-tuning and controlling these variables may in some cases not translate into an assay that performs better, but it does generally result in an assay with better plate-to-plate and day-to-day reproducibility, as exemplified in [28].


Detection and elimination of interferences during HTS implementation

Interferences due to colloidal aggregation: Colloidal aggregation occurs via the self-association of organic molecules in aqueous buffer solutions, and around 95 % of hits in screening campaigns have been attributed to unspecific, aggregation-based inhibitors [29]. The widespread recognition of colloidal aggregation as a main cause of false positives is largely attributable to the work of Shoichet and coworkers [29], [30], [31], [32], [33], [34]. They established that many nonspecific inhibitors or “heavy hitters” self-associate in biochemical buffers, forming spherical particles of various diameters (300–1000 nm) that are detectable by DLS and TEM.

Proposed criteria for deeming compounds aggregators include time-dependent inhibition of targets, quick reversal of inhibition (within seconds) upon the addition of detergents, inhibition that is strongly dependent on experimental parameters (e.g., pH, enzyme concentration, protein concentration, and ionic strength), and the occurrence of steep concentration-response relationships [30], [31]. Because of this, increased Hill coefficients are generally thought to be reliable predictors of aggregation-based inhibition [29]. Aggregators have been postulated to interact directly with target proteins, causing partial protein denaturation [32], or to sequester proteins, leading to reduced accessibility of the substrate [33]. However, the exact molecular mechanisms involved are still being investigated. At a typical screening concentration of 5 µM, about 1–2 % of compounds in libraries with “drug-like” properties have been estimated to behave as aggregators, and at 30 µM that percentage has been shown to increase to 19 % [34]. In both scenarios, the prevalence of aggregators is relevant when considering the typical hit ratios (< 1 %) of screening campaigns.
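Steep concentration-response curves can be flagged computationally during hit triage. The sketch below (Python with NumPy/SciPy; the data and the cutoff of a Hill slope above 2 as a warning sign are illustrative assumptions, not validated thresholds) fits a Hill equation to percent-inhibition data.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, slope):
    """Hill equation for percent inhibition (0-100 %)."""
    return 100.0 * conc**slope / (ic50**slope + conc**slope)

# Hypothetical concentration-response data (uM vs. % inhibition)
conc = np.array([0.5, 1, 2, 5, 10, 20, 50])
inhib = np.array([2, 5, 9, 30, 85, 97, 99])  # abrupt transition around 5-10 uM

(ic50, slope), _ = curve_fit(hill, conc, inhib, p0=[5.0, 1.0])
print(f"IC50 = {ic50:.1f} uM, Hill slope = {slope:.1f}")
if slope > 2.0:  # illustrative cutoff; specific inhibitors usually show slopes near 1
    print("Steep Hill slope: possible aggregation, follow up with a detergent test")
```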

Natural products were among the first aggregate formers reported by Shoichetʼs laboratory around 10 years ago. McGovern and Shoichet [31] analyzed 15 nonspecific kinase inhibitors and showed that eight of them were aggregate formers, of which three molecules were of natural origin ([Fig. 5]). These were the very well-known flavonoid quercetin; indirubin, which is present in Indigofera tinctoria and claimed to have antitumor activity [35], [36]; and rottlerin, another phenolic compound naturally occurring in Mallotus philippinensis (the “Kamala” tree) and reportedly active as an opener of potassium channels (BKCa) [37]. Subsequent investigations confirmed these results and established the promiscuous enzymatic inhibition profiles of these natural compounds [38], [39].

Fig. 5 Three of the first reported aggregation-induced promiscuous inhibitors of natural origin. A quercetin, B indirubin, and C rottlerin. (Color figure available online only.)

Recently, a systematic study of the occurrence of colloidal aggregation among purified natural molecules was performed [40]. These authors screened a small but representative subset of natural phenolic compounds (117) and found that the proportion of aggregating compounds was around 12 % when tested at a concentration of 10 µM. They showed that flavonoids were more aggregation prone than other phenolic compound classes such as coumarins and organic acids. In fact, all of the studied flavonoids (23) formed DLS-detectable aggregates in at least one of four different tested conditions. The occurrence of aggregates, however, did not automatically translate into unspecific inhibition, and only two flavonoids (quercetin and rhamnetin) were identified as promiscuous [40]. The study, however, laid the foundation for another equally plausible scenario: that aggregation could also lead to false negatives by reducing, for instance, the concentration of the compounds available in solution. Based on all these findings, it has become necessary to acknowledge aggregate formation as a likely source of either false positive or false negative hits when screening natural compound collections.

A first step to exclude false positives after performing a primary screening campaign is to check whether any of the identified natural hits have previously been flagged as aggregators. For flavonoids, we recommend checking the list compiled in [40] (available at: http://www.mdpi.com/1420-3049/17/9/10774), while for other types of natural compounds, the list of aggregators maintained by Shoichetʼs laboratory can be consulted (available at: http://shoichetlab.compbio.ucsf.edu/take-away.php).

In biochemical assays, a simple experimental way to exclude interfering aggregators is to retest their inhibitory effects in the presence of non-ionic detergents. If the inhibition is significantly attenuated by small amounts of non-ionic detergent, the compound is likely to act via aggregation. Detergents (0.01–0.1 %) have been proposed to disrupt aggregate formation as well as to dissociate the protein-aggregate interaction. Feng et al. [29] performed a large detergent-based campaign with more than 70 000 molecules and concluded that the inclusion of 0.01 % Triton X-100 effectively reverses the promiscuous inhibition caused by more than 95 % of the aggregators. This strategy can be optimized to perform well in either lower throughput (i.e., cuvettes or 6-well plates) or higher throughput formats (i.e., 96-, 384-, or 1536-well plates). Other detergents such as Tween-20, CHAPS, saponin, and digitonin have been shown to be applicable as well [41]. Experiments can be run separately in the presence and absence of the detergent. A precautionary note is that detergents should preferably be added to the buffer before any other component. The introduction of detergents into many different assay formats has proven possible without compromising assay quality [41]. In assays that cannot tolerate non-ionic detergents, for instance cell-based assays, a suggested alternative has been to use 1 mg/ml of bovine serum albumin (BSA) instead [30], but this molecule can sometimes sequester non-promiscuous hits [34] or cause other types of interferences [42].
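The analysis of such a detergent counter-screen can be as simple as the following sketch (Python; the 50 % loss-of-inhibition cutoff for calling a hit detergent-sensitive is an illustrative assumption).

```python
def flag_aggregators(hits, recovery_cutoff=0.5):
    """Flag hits whose inhibition collapses when 0.01 % Triton X-100 is added.

    hits: dict mapping compound name -> (% inhibition without detergent,
                                         % inhibition with detergent).
    A compound losing more than `recovery_cutoff` of its inhibition in the
    presence of detergent is flagged as a likely aggregator (illustrative cutoff).
    """
    flagged = []
    for name, (no_det, with_det) in hits.items():
        if no_det > 0 and (no_det - with_det) / no_det > recovery_cutoff:
            flagged.append(name)
    return flagged

# Hypothetical primary-screen hits retested with and without detergent
hits = {"compound_A": (92.0, 8.0),   # inhibition vanishes: likely aggregator
        "compound_B": (85.0, 80.0)}  # inhibition persists: likely specific
print(flag_aggregators(hits))  # ['compound_A']
```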

Although the detergent sensitivity concept is widely applied for excluding unspecific inhibitors, other methods have been implemented that allow a direct and noninvasive quantification of the formed aggregates. Among them, high-throughput screening assays based on SPR using Biacore technology [39] and PC biosensor microplates [43] have been reported. From these studies, it has also become clear that some of the interactions between aggregates and protein targets are spontaneously reversible and this can add additional complexity to the process of flagging and removing aggregators from natural libraries.

Other features of the functional behavior of hits can similarly help in distinguishing the “false” (or promiscuous) hits from the “true” ones. For instance, if an inhibitor is found to display a competitive kinetic mechanism, the compound can be regarded as less likely to be an aggregator [34]. This consideration is based on the structural similarities that competitive inhibitors typically share with the substrate and their ability to specifically recognize catalytic sites, which differs from the unspecific nature of aggregate-induced inhibition. On the other hand, the preservation of inhibitory activity after spinning compounds for several minutes in a centrifuge also indicates that aggregates are not being formed.

Apart from experimental approaches, attempts have been made to characterize the physical-chemical properties of aggregators in the hope of applying in silico calculations to predict aggregation potential. Two features have been suggested to distinguish between aggregators and non-aggregators: clog P and aqueous solubility. Based on these features, a valid distinction was made in more than 80 % of cases for a set of 111 compounds, with aggregators exhibiting higher clog P values and lower aqueous solubility [44], [45]. A better prediction (correct in over 90 % of cases) has further been achieved with a more complex recursive partitioning model [44].
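As a minimal illustration of property-based flagging, the sketch below uses the open-source RDKit toolkit to compute Crippen clog P values for hypothetical hits; the clog P cutoff is purely illustrative and is not the classifier reported in [44].

```python
from rdkit import Chem
from rdkit.Chem import Crippen

# Hypothetical screening hits given as SMILES
smiles = {"quercetin": "C1=CC(=C(C=C1C2=C(C(=O)C3=C(C=C(C=C3O2)O)O)O)O)O",
          "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C"}

for name, smi in smiles.items():
    mol = Chem.MolFromSmiles(smi)
    clogp = Crippen.MolLogP(mol)  # Crippen estimate of log P
    # Illustrative rule of thumb only: higher clog P correlated with aggregation in [44]
    flag = "check for aggregation" if clogp > 3.0 else "lower aggregation risk"
    print(f"{name}: clogP = {clogp:.2f} ({flag})")
```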

We have, for instance, mapped the chemical space occupied by aggregators and non-aggregators using ChemGPS-NP, a freely available chemography tool [46], [47] applicable to natural compounds ([Fig. 6], unpublished results). The aggregators and non-aggregators used (96 compounds) were obtained from the publicly available repository of Shoichetʼs laboratory mentioned earlier. Regions of the chemical space, as defined by a combination of descriptors characterizing molecular size (PC1), aromaticity (PC2), and lipophilicity (PC3), were seen to overlap between aggregators and non-aggregators. As pointed out by other authors [44], the selection of proper descriptors and the development of simple models that can accurately predict the aggregating behavior of compounds is a challenging task. We believe this is an area in which more research needs to be performed in order to facilitate follow-up studies during the reconfirmation stage of a large number of primary hits.

Fig. 6A Workflow of the in silico process used for mapping of the chemical space of aggregators and non-aggregators. B 3D representation of the chemical space of aggregators (blue dots) and non-aggregators (red dots), using the Principal Component Analysis (PCA)-based chemical space navigation tool ChemGPS-NP (unpublished results). As schematically represented in A, the analysis uses 2D descriptors (35) describing the physical-chemical properties of the compounds that are calculated from SMILES. Salts, hydration information, as well as counter-ions are excluded from SMILES. For analysis of the chemical space, the first four dimensions (PC1–PC4) are plotted using the software Grapher 2.1 (MacOS X, US).

Optical interferences: Crude extracts offer maximal chemical diversity and do not require any purification steps, but in order for activity to be detected, they often need to be screened at high concentrations due to the low concentrations of their active components. A major drawback of screening concentrated crude or semi-purified extracts is that color interference, autofluorescence, or light scattering by particulate samples (such as those contaminated by lab dust) can occur, generating false positives and negatives. Similarly, colored and/or autofluorescent pure compounds can cause artifactual results. Indeed, many natural compounds are rigid and planar and possess multiple conjugated aromatic moieties, which increase the probability of endogenous fluorescence [48]. This is the case for widely distributed natural molecules such as coumarins, anthraquinone derivatives (for instance, hypericin, present in the alcoholic extract of Hypericum perforatum), and pigments such as carotenes, chlorophyll, or chlorophyll breakdown products such as phaeophorbide A [49]. Additionally, the aging of samples can result in the formation of degradation products, which can be strongly light-absorbing compounds, even in the visible range (400–700 nm) [50].

In these cases, the spectral properties of the compounds interfere with the light detection step of the screening assay, and these interferences predominate in assays run in absorbance and fluorescence (FI, FP, and FRET) modes [51]. Such interferences manifest as a typical increase in the background signal of the assay, but also as participation in unwanted FRET with the assay fluorophore [48]. Given that the vast majority of screening assays are nowadays run with absorbance- or fluorescence-based technologies, these issues cannot be ignored. Moreover, the increased use of homogeneous assays accentuates these problems, as the test samples remain in the wells for the entire duration of the assay.

Because these interferences are technology dependent, suggested solutions typically involve changes in the protocols or, ultimately, in the detection methods. The simplest strategy for dealing with minor optical interferences is to include a step in which the absorbance or fluorescence of the interfering molecules is measured in the absence of any other reaction component and then subtracted from the signal detected in the real biochemical or cell-based assay. However, in many cases, the compound fluorescence can be higher than that of the fluorophore, even at relatively low concentrations (10 µM) [52], and this strategy is thus not sufficient. The problem is further aggravated when higher concentrations of extracts or pure compounds (> 10 µM) are tested in cell-based assays, since different cell types can display endogenous fluorescence under various conditions. This solution is also not operational for unwanted FRET artifacts, which are more difficult to identify and correct. In the literature, a common way of bypassing this issue has been to exclude optically interfering compounds from follow-up studies, but in our opinion this strategy hampers the identification of natural scaffolds that could otherwise have held promise as starting points for lead refinement strategies [48]. A better solution is offered by simple mathematical procedures that can be applied to correct for both increases and decreases from the baseline caused by interferences from test compounds, as described in [50]. Another contribution focuses on different strategies that can be applied to tackle this type of interference specifically in FRET-based assays [53]. However, even if data can be corrected with these procedures, they would eventually need to be rejected in cases where the interferences result in more than a 2-fold change in the signal [50].
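A minimal sketch of such a background correction, together with the 2-fold rejection rule, is given below (Python; the readouts, column layout, and variable names are hypothetical).

```python
import numpy as np

def correct_for_compound_background(assay_signal, compound_only, blank):
    """Subtract compound-only background from the assay signal and flag
    wells where the compound alone shifts the signal more than 2-fold
    relative to the blank (such data should be rejected, per [50])."""
    assay_signal = np.asarray(assay_signal, dtype=float)
    compound_only = np.asarray(compound_only, dtype=float)
    corrected = assay_signal - (compound_only - blank)
    fold_change = compound_only / blank
    reject = (fold_change > 2.0) | (fold_change < 0.5)
    return corrected, reject

# Hypothetical fluorescence readouts (arbitrary units)
assay = [1500, 1450, 3200]          # compound + full reaction
compound_alone = [110, 130, 2500]   # compound in buffer only
blank = 100.0                       # buffer-only wells

corrected, reject = correct_for_compound_background(assay, compound_alone, blank)
print(corrected)  # background-corrected signals
print(reject)     # third well flagged: > 2-fold intrinsic fluorescence
```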

A second-tier strategy that we can recommend is the utilization of fluorescence assays that rely instead on red-shifted dyes or longer wavelength tracer fluorophores to avoid spectral overlap with organic compounds that absorb in the ultraviolet region or with other autofluorescent molecules (such as coumarins). Several dye classes have been developed in recent years with absorption maxima beyond 520 nm, extending to nearly 800 nm, from which it is possible to select dyes for nearly all types of assay applications. The suitability of this approach for natural product screening has been documented. Red-shifted fluorogenic substrates have been shown to reduce interferences during the screening for protease inhibitors from natural extracts of prokaryote, fungal, and plant origin, as well as from pure natural compounds [54]. Also, an FP assay using red-shifted dyes has been developed to screen for kinase inhibitory activity, resulting in significantly fewer interferences from constituents of microbial extracts when compared with a fluorescein-based competitive FP assay or a [33P]ATP Flashplate assay [55]. However, this strategy is not exempt from limitations either, as it may not prevent interferences from red- and far-red-emitting pigments such as chlorophyll and other naturally occurring porphyrins.

A third strategy for overcoming these limitations is the implementation of methods that entirely preclude the use of chemical labels, otherwise known as label-free methods. Particularly in cell-based assays, label-free methods rely on impedance-based measurements to detect changes in the electric properties of cells or the passage of ions through them. These changes can be brought about by changes in cellular attachment to electrodes located in dishes, for instance in 96-well microplates, or by the activity of different receptor types (e.g., GPCRs, tyrosine kinases) [56], [57]. Beneficial aspects of these methods include that they can be applied without restriction to colored and autofluorescent samples, that they are noninvasive, and that they offer continuous readouts and a simultaneous view of short- and long-term cellular events with very minimal labor involved. Moreover, because the cells are not stained, fixed, or altered in any way at any point, the samples can subsequently be interrogated for the presence of metabolites or for other responses using chemical labels. Such additional interrogation allows a multicomponent response to be obtained from a single culture of cells, which diminishes biological variability. Currently available label-free technologies, for instance Electric Cell-Substrate Impedance Sensing (ECIS, from Applied Biophysics), the Epic System (from Corning), or xCELLigence (from ACEA Biosciences), were initially adopted very slowly in the drug discovery arena, but over the last ten years they have attracted increasing interest as their throughput and robustness have improved.

To date, label-free methods have been shown to be excellent tools for tracking cytotoxic effects, including in the case of natural samples. Investigations have been performed on the cytotoxicity of a large collection of extracts from Bangladeshi traditional medicinal plants against pancreatic and breast cancer cells [58], [59], [60]. These studies followed a tiered screening approach in which the first screen is performed on all samples using a label-free PC biosensor, followed by two follow-up assays using conventional labels (MTT proliferation and caspase 3 induction). This approach ensures that none of the tested extracts (more than 55 in two of the studies) are excluded due to interfering optical signals during the first screen. The PC biosensors are located in 96-well microplates and produce a highly localized shift in reflected wavelength at the site of cellular attachment; they are coupled to an image detection system that scans the biosensor surface and has sufficient resolution to monitor the attachment/detachment of individual cells. Image analysis can be used to study the population of cells in the wells, which can readily be translated into a simple cell count. Using the label-free biosensor assay, researchers have been able to rapidly differentiate and classify the effects on cancer cells of several plant extracts with previously unknown functions [58], [59].

In recent work, Kling et al. [61] used the ECIS method to screen for the neuroprotective activity of 19 phenolic compounds, including flavonoids, flavonoid metabolites, phenolic acids, and their methyl esters (several of them colored), after induction of oxidative stress with tert-butyl hydroperoxide, and compared the output with a conventional cytotoxicity assay based on an endpoint measurement with the MTT probe. This work documented the benefits of studying neuroprotection mechanisms via the recording of continuous cellular responses with ECIS. The described method was also particularly advantageous for dealing with compounds like quercetin or kaempferol, which have been shown to interfere with the performance of redox probes such as MTT [62], [63], [64].

In our laboratory, studies have been performed on the cytotoxicity of several natural extracts, and one of them (coded NP1, unpublished results) has posed a challenge, as this plant extract interferes with many commonly used viability assays. For instance, NP1 reduces resazurin in the absence of cells, likely due to redox-active constituents, and it also increases the signals of calcein and ethidium homodimer-1 (EthD-1; components of the commercial LIVE/DEAD viability/cytotoxicity kit), which to a certain extent can be explained by the extractʼs autofluorescence. Thus, NP1 can cause an overestimation of cell viability, leading to false negative results.

These problems are circumvented when the label-free ECIS assay is applied for the cytotoxicity measurement of crude extracts. In [Fig. 7], the impedance curves recorded with ECIS show an overall concentration- and time-dependent cytotoxicity of NP1 (unpublished results). In this assay, increases in impedance are associated with cellular attachment, while decreases are caused by toxic insults that can result in the loss of integrity of the cellular monolayer formed on the ECIS electrodes, as exemplified by the addition of NP1 (200 µg/ml, [Fig. 7]).

Fig. 7 Impedance changes recorded by ECIS in GT1–7 cells treated with a crude plant extract (coded NP1) at different concentrations (50, 100, 200 µg/ml). ECIS instrumentation model Z (Applied Biophysics) was used. 8W1E electrodes were pretreated with 10 mM of cysteine as recommended by the manufacturers. Electrodes were filled with 400 µl of DMEM, and impedance recorded at 16 kHz every 60 seconds for 1 hour. Cells (400 µl, 4 × 105 cells/ml) were then added and impedance continued to be measured. Twenty-four hours after adding the cells, 40 µl of the medium was replaced by 40 µl of the NP1 or medium in the untreated control wells.

Specific advantages of using ECIS in this type of study are readily noticeable. The first is related to the continuous nature of the cellular responses that are measured. For example, after adding NP1 (100 µg/ml, [Fig. 7]), a sudden decrease of about one-fourth of the impedance values compared to untreated cells is detected, but within a few hours a recovery of the cells is recorded, and after 40 hours less than a 10 % decrease in impedance values remains. This indicates that early toxic events triggered by NP1 are temporarily buffered by the cells, which would have been overlooked in a conventional cytotoxicity assay performed at only a single time point (for instance, at 48 hours). Another advantage of the ECIS studies is the possibility of recording delayed cytotoxicity events. In [Fig. 7], it can be seen that at 48 hours the cells still tolerate NP1 (100 µg/ml) with less than a 25 % decrease in impedance values compared to untreated cells. However, as time proceeds, the impedance steadily drops, and clear toxicity is seen after 72 hours, with an over 50 % decrease in impedance values. This delayed toxicity would also have gone undetected had the measurements been conducted using only acute cytotoxicity assays.
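To illustrate how such continuous readouts can be summarized, the sketch below (Python/NumPy; the arrays are hypothetical stand-ins for exported ECIS time courses, not the data of [Fig. 7]) expresses the impedance of treated wells as a percentage of the untreated control at each time point.

```python
import numpy as np

# Hypothetical impedance time courses (ohms) exported from an ECIS run:
# one value per time point (hours after treatment)
hours = np.array([0, 4, 24, 48, 72])
untreated = np.array([1200, 1250, 1300, 1320, 1330], dtype=float)
treated = np.array([1200, 900, 1180, 1000, 600], dtype=float)  # e.g., 100 ug/ml extract

# Impedance relative to the untreated control at each time point
relative = 100.0 * treated / untreated
for t, r in zip(hours, relative):
    print(f"{t:>2} h: {r:5.1f} % of control")
# An early drop followed by recovery, then a delayed decline, mirrors the
# NP1-type pattern that a single-endpoint assay would miss.
```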

Label-free methods, on the other hand, are not exempt from disadvantages. Among them are the difficult interpretation of results and the lack of a full understanding of the biological significance of some measured signals [65], [66]. These two elements seem to relate to the biophysical nature of the measured responses, which are mostly associated with complex physical phenomena and are not always fully elucidated from a biochemical perspective. The utilization of these methods by the academic community has also been hampered by their high consumable costs. However, upon identifying natural compounds or extracts containing optically interfering constituents, these methods can offer a new alternative for further investigation of some biological activities. Based on this, it is our view that more dedicated research into label-free bioactivity screens of natural products will benefit drug discovery.

Interferences due to the meniscus effect: Optical signals can also be attenuated for reasons that are not associated with the optical properties of the compounds, but instead with their surfactant-like physical properties. Although these interferences are not as frequent as those described above, some natural samples containing surfactants such as saponins can cause a deepening of the meniscus of the liquid in microtiter plate wells, which decreases the path length of the liquid column and can thus reduce the amount of light measured through it. This interference typically results in a lower signal when running absorbance-, fluorescence-, and luminescence-based assays in top-reading instruments. Methods for correcting these effects have been described recently [50].



Antibacterial Screening of Natural Products As a Case Study

Having discussed general aspects of the development and implementation of screening assays, we will now focus on an area in which natural products offer a successful case study: antibacterial drug discovery. Despite the immense impact that antibacterial drugs have had on overall life expectancy and quality, bacterial infections remain a persistent health burden in both industrialized and developing countries [67]. Considering the perspectives of both the bacterium and the host, the two obvious challenges in treating bacterial infections are the selection of resistant mutants and the increasing number of opportunistic infections among immunocompromised persons [68], [69].

Antibacterial agents have been the focus of natural product research for several decades, and antibacterial activity continues to be among the most widely screened bioactivities in the natural product community. One evident reason for this is the success of natural products as antibacterial drugs and drug leads. According to a recent report by Cragg and Newman [4], 75 % of all new small molecule antibacterial agents approved by the FDA between 1981 and 2010 were of natural origin. However, antibacterial drug discovery in general is currently a matter of major concern due to low success rates, and several pioneers and professionals in the field have pointed out the need to reevaluate screening strategies to reconcile our research practices with the demands set by resistance, persistence, and opportunistic infections [70], [71], [72], [73]. The wide variety of antibacterial assays reflects the extensive efforts to identify new drug molecules within this therapeutic area. The rest of this section aims to give a short overview of the different assays available, highlighting some of the points discussed in the previous sections.

Phenotypic assays for bacterial growth inhibition

The classical antibacterial screening strategy has relied on exposing bacterial cultures grown on agar plates to test samples and measuring the inhibition zones (areas with no bacterial growth) around the samples. This methodology was used in the discovery of virtually all antibiotics during the 1960s, many of which still form the basis of our antibacterial drug arsenal [74]. It is still widely used for low-throughput bioactivity studies and has recently been applied, among other studies, to the evaluation of plant extracts selected for their ethnobotanical use in different areas [75], [76]. The popularity of this assay for screening the activities of plant extracts most likely reflects tradition rather than the rational selection of the assay method least prone to interference from the test material. As recently pointed out by Gertsch, the scientific knowledge needed for choosing the biologically most relevant means of studying the biological activities of complex plant-based mixtures, in which synergism, for instance, could take place, is currently missing to a large extent [77]. Ethnobotanical studies typically involve at most some tens of samples, reflecting the limitations of the method for screening purposes. Since the assay endpoint relies on a visually measurable inhibition zone, the assay requires large amounts of reagents and laboratory space, and suffers from a slow and labor-intensive readout because of the need for manual determination of the zones of inhibition. Nevertheless, the assay can be miniaturized and automated by using image-based screening platforms capable of quantifying the bacteria-free zones on a much smaller scale [78], [79].

To meet the needs set by increasing sample numbers, various microtiter plate-based assay formats have been established, relying on the measurement of the bacterial biomass or the metabolic activity of bacteria growing in suspension [80]. Considering time and reagent consumption per sample, such assays are much better suited for large sample collections or projects involving bioassay-guided isolation of the active ingredients of extracts, as shown by the recent identification of novel broad-spectrum antibacterial agents among natural products using a pH-dependent metabolic dye [81] or by a study on Pseudomonas aeruginosa using optical density measurements [82]. Integrating such methods with other technologies, such as microfluidics and fluorescence-aided cell sorting, has also allowed for the development of sophisticated platforms for bacterial viability evaluation [83].

In antibacterial assays, potential sources of interference are dominated by the colored or fluorescent nature of the tested samples, since all classical viability probes rely on photometric or fluorometric readouts within the spectral region that is also covered by natural products and other small molecules, as discussed earlier. One practical means of overcoming this type of interference is a baseline measurement of each sample prior to the bacterial growth phase, allowing potentially increased background values to be eliminated for each sample separately. Another potential source of compound-borne interference is the ability of some redox-active compounds to directly convert viability dyes whose conversion to a colored or fluorescent form relies on cellular oxidoreductive metabolism. If such a feature is suspected in an antibacterial assay, we recommend performing counter-screens in the absence of living cells to detect any direct interactions between the dye and the samples to be tested.
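Such a cell-free counter-screen can be evaluated with a few lines of code, as in the sketch below (Python; the 3-standard-deviation cutoff above the dye-only control is an illustrative choice, and the sample values are hypothetical).

```python
import numpy as np

def flag_direct_dye_conversion(sample_signals, dye_only_signals, n_sd=3.0):
    """Flag samples whose cell-free signal exceeds the dye-only control
    by more than n_sd standard deviations (illustrative cutoff)."""
    dye_only = np.asarray(dye_only_signals, dtype=float)
    cutoff = dye_only.mean() + n_sd * dye_only.std(ddof=1)
    return {name: signal > cutoff for name, signal in sample_signals.items()}

# Hypothetical cell-free fluorescence of a resazurin-type dye
dye_only = [210, 195, 205, 200]   # dye + buffer, no cells, no sample
samples = {"extract_1": 215,      # within noise: no direct conversion
           "extract_2": 2400}     # strongly converts the dye itself
print(flag_direct_dye_conversion(samples, dye_only))
```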

From a biological point of view, the major difference between the above-mentioned microtiter plate-based assays and the classical agar culture assays is the state of the bacterial community. The microtiter plate-based assays rely on liquid cultures of planktonic bacteria, in which the cell population is maintained in the logarithmic growth phase. In contrast, bacteria on agar plates grow attached to the solid medium surface, which, as discussed below, is considered a biologically more relevant growth state for most pathogenic bacteria.

More focused or hypothesis-driven phenotypic screening assays have been set up using reporter gene assays with fluorescent or luminescent bacterial strains. Depending on the specific setup, the reporter systems are used to identify inhibitors of, e.g., bacterial transcription [84] or translation [85]. Technically speaking, the time span of these assays is typically much shorter than in classical growth inhibition assays, primarily for two reasons. First, changes in transcription, translation, or other similar events can be detected within a few hours when the monitored event is the direct target of the inhibitor in question. Second, fluorescent and luminescent reporter enzymes typically yield readouts with a good SW, providing the basis for good assay performance even with short exposure times. The prerequisite for achieving biologically meaningful screening assays by this approach is validation of the stability of the reporter gene insertion within the bacterial genome and characterization of the growth kinetics of the recombinant strain.

As in pure growth inhibition assays, autofluorescent natural products may interfere with fluorescent readouts, whereas luminescent reporter genes do not suffer from such interferences. However, since the light generated by luciferases originates from an enzymatic reaction, any inhibitor, activator, or stabilizer of the reporter enzyme can distort the amount of light detected and may thus be identified as a false positive in any screen utilizing the reporter [86]. The proportion of firefly luciferase inhibitors in an unbiased chemical library has been estimated at approximately 3 % [87], and screening data on luciferase inhibitors have also been made publicly available (PubChem AID 411). To control for false positives or negatives due to reporter enzyme-directed effects, we recommend consulting such publicly available data, testing the activity of hit compounds on purified luciferase, and performing a hit confirmation. It is also worth keeping in mind that luciferase-inhibiting natural compounds may still exhibit genuine bioactivities unrelated to this property. For instance, the extensively studied polyphenol resveratrol has been reported to inhibit firefly luciferase [88] and should therefore be treated with special caution in luciferase-based assays, but certainly not all biological activities reported for resveratrol can be attributed to luciferase-related artifacts. Therefore, we do not consider excluding compounds like resveratrol from natural product libraries necessary or even beneficial.
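
A counter-screen of the kind recommended above can be folded into hit triage with a simple filter. The sketch below is our own illustration (function name, data layout, and the 50 % cutoff are hypothetical); it separates compounds that inhibit the purified luciferase from those carried forward:

def triage_luciferase_hits(primary_hits, counter_inhibition, cutoff=50.0):
    # primary_hits:       compound identifiers scored as hits in the reporter screen
    # counter_inhibition: dict of compound id -> % inhibition of the purified
    #                     luciferase in a cell-free counter-screen
    retained, flagged = [], []
    for cpd in primary_hits:
        if counter_inhibition.get(cpd, 0.0) >= cutoff:
            flagged.append(cpd)     # probable reporter-enzyme artifact
        else:
            retained.append(cpd)    # carry forward to hit confirmation
    return retained, flagged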

With respect to intracellular bacteria, image-based HCS assays provide a more efficient and informative alternative to the laborious assay protocols that often involve reading the results manually under a microscope. Simultaneous detection of bacterial replication centers and host cell components has been described for different bacteria [88]. While most bioactivity screens relying on manually determined intracellular bacterial loads have involved fewer than one hundred samples (e.g., [89], [90]), the screening of 57 000 compounds with an HCS assay using GFP-expressing Mycobacterium tuberculosis and RAW macrophages as host cells illustrates the throughput that HCS platforms can reach [91].

The improved information content achieved by HCS applications in an antibacterial context typically refers to information on bacterial antigen localization, detection of specific stages in the bacterial life cycle, or simultaneous detection of host cell viability. While the former two may provide significant benefits in hypothesis-driven or targeted antibacterial screening, the latter simply aims to exclude toxic compounds without the need for additional counter-screens. Even though the concentration-dependent effects of different antibiotics on bacterial cell morphology are relatively well known, obtaining mechanistic data from image-based primary screens is limited by the resolution of fluorescence microscopes, since bacterial cells are at least one order of magnitude smaller than mammalian cells. However, the recent work by Peach et al. [92] described the development of a software platform capable of predicting the mechanisms of action of antibacterial ingredients within marine natural product extracts using Vibrio cholerae cell morphological phenotypes and a training set of antibiotics with known mechanisms of action.
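
The general principle behind such morphology-based mechanism prediction can be illustrated with a nearest-centroid classifier: reference antibiotics with known mechanisms define class centroids in feature space, and a test sample is assigned to the closest one. This is only a schematic sketch of the idea, not the software described by Peach et al. [92]; the feature choice and scaling are our own assumptions:

import numpy as np

def predict_mechanism(profile, train_profiles, train_labels):
    # profile:        1D array of image-derived morphological features for one sample
    # train_profiles: 2D array, one row per reference antibiotic treatment
    # train_labels:   known mechanism class for each training row
    X = np.asarray(train_profiles, dtype=float)
    y = np.asarray(train_labels)
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-9   # z-score on the training set
    Xz = (X - mu) / sd
    p = (np.asarray(profile, dtype=float) - mu) / sd
    classes = sorted(set(train_labels))
    centroids = np.array([Xz[y == c].mean(axis=0) for c in classes])
    return classes[int(np.argmin(np.linalg.norm(centroids - p, axis=1)))]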

The nematode worm Caenorhabditis elegans has become an established model for studying bacterial pathogenicity in a whole organism and has, more recently, also gained popularity as an in vivo screening model for antibacterial activity [93]. The ability of C. elegans to feed on pathogenic bacteria, and the death of the worms upon infection, has formed the basis for straightforward assays detecting worm viability with live/dead dyes. With a length of about 1 mm, C. elegans can be dispensed with liquid handling instruments. Academic screens using a C. elegans rescue assay with tens of thousands of compounds and natural product extracts have been carried out on Enterococcus, Pseudomonas, and Vibrio species [94], [95]. The power of C. elegans rescue assays in antibacterial screening is obvious for two reasons: i) they detect not only inhibitors of bacterial replication but also compounds suppressing the in vivo virulence of the target bacterium, and ii) they can exclude compounds showing acute toxicity toward the host organism or pharmacokinetic properties limiting penetration to target tissues [93]. While HCS and other phenotypic platforms are, in some cases, limited in the sample numbers they can accommodate, many academic screening campaigns involve small- or medium-sized collections of compounds or extracts that can easily be tested in assays with high information content, and we thus encourage the academic natural product research community to take full advantage of this aspect.
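
Worm survival in such rescue assays is conveniently normalized to the window spanned by infected and uninfected controls. A minimal sketch of this calculation (our own illustration with hypothetical argument names):

def percent_rescue(alive_treated, total_treated, surv_infected, surv_uninfected):
    # surv_infected:   survival fraction in infected, untreated control wells
    # surv_uninfected: survival fraction in uninfected control wells
    frac = alive_treated / total_treated
    return 100.0 * (frac - surv_infected) / (surv_uninfected - surv_infected)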


Target-based approaches for identifying inhibitors of bacterial growth

Screening of classical antibiotic targets: Clinically approved antibacterial drugs target only a few bacteria-specific structures, such as ribosomes and enzymes essential for cell wall or folate biosynthesis. Target-based screening assays, both ligand-binding and functional enzyme assays, have been described for all of these targets and can be readily used for screening ([Table 4]). A primary advantage of these targets is that their essentiality for bacterial growth is well validated and the proteins in question have been extensively characterized by biochemical and structural biology means. Many of these proteins and assay reagents are also commercially available (for representative examples, see references in [Table 4]).

Table 4 Some examples of assays for antibacterial targets with existing antibiotics that are in clinical use.

Target | Assay format | Detection mode | Application | Reference
Penicillin-binding protein | Competitive binding assay | Fluorescence polarization; labeled penicillin as competing ligand | Screening of pooled small molecules | [136]
Ribosome | Binding assay | Fluorescence quenching of a fluorophore-labeled ribosome | Screening of a small set of soil microbe extracts | [137]
Ribosome | Competitive binding assay | Fluorescence; labeled neomycin | Characterization of aminoglycoside binding | [138]
Ribosome | Competitive binding assay | FRET; coumarin-conjugated aminoglycoside and ribosome | Characterization of aminoglycoside libraries | [139], [140]
Topoisomerase | Enzyme inhibition assay | Fluorescence intensity; fluorophore-labeled oligonucleotide as substrate | Screening of small molecule and NP extract libraries | [96]
Dihydrofolate reductase | Enzyme inhibition assay | Fluorescence intensity; NADH levels determined with resazurin | Screening of synthetic and natural small molecules | [141]
Dihydropteroate synthase | Enzyme inhibition assay | Radiometric; substrate and product separation by TLC | Screening of pyrimidine libraries | [142]

Many of these targets have been exhaustively probed by chemical inhibitor screens, and the scaffolds of known ligands have been widely diversified by medicinal chemists, reflecting the fact that the vast majority of antibiotics approved during the past decades are derivatives or analogues of previously known drugs [71], [73]. However, previous and recent work on these targets illustrates that the target-based approach can also successfully identify inhibitors or modulators that are structurally unrelated to the known antibiotics. Examples in this respect include the identification of several non-beta-lactam ligands of a Neisseria gonorrhoeae penicillin-binding protein, and of anziaic acid, isolated from Hypotrachyna sp., as an inhibitor of topoisomerase 2 (an enzyme also known as DNA gyrase) [96].

Despite the massive amount of work put into the known targets, only a limited number of drug molecules suitable for clinical use are available for certain targets. For example, mupirocin, a monoxycarbolic acid derivative originally isolated from Pseudomonas fluorescens, remains the only clinically approved inhibitor of the aminoacyl-tRNA synthetases (ARS; the enzyme family that charges tRNAs with amino acids for bacterial protein synthesis), but its use has been limited to topical applications such as wound infections because its properties are unsuitable for systemic use [97]. Trimethoprim, in turn, remains the only approved antibacterial drug targeting dihydrofolate reductase [98]. Identifying new molecules chemically unrelated to the previously used drugs can thus be considered a valuable approach to broadening our antibacterial drug arsenal.

However, the usefulness of mupirocin and trimethoprim, as well as of most other antibiotics in clinical use, is limited by the emergence of resistant bacterial strains, in part due to the accumulation of low-affinity variants of the target protein. Screening wild-type enzymes and bacterial strains is therefore not a sufficient means of identifying clinically useful compounds for further development; instead, including low-affinity mutants of the target protein is recommended to overcome this problem [70]. Other resistance mechanisms, such as compound-inactivating enzymes or efflux pumps, can also be incorporated into screening platforms, which may be of particular benefit when screening plant-derived material. While high-potency growth inhibitors are typically not found among pure compounds isolated from plants, some classical examples of synergistic combinations of compounds found within plant extracts are known [99]. Assay methods and advances in identifying inhibitors of both beta-lactamase enzymes and bacterial efflux pumps have been reviewed previously [100], [101] and are thus not covered here in more detail.

Screening of novel targets: It is generally thought that finding more potent inhibitors of known targets will not solve the resistance problem in the long run, and huge efforts have been made to find new targets for antibacterial discovery. Genomic analyses of pathogenic bacterial species have indicated that there are 100–200 conserved bacterial genes with no close homologues in eukaryotes [71]. The pharmaceutical industry has invested significant time and resources in validating the essentiality of these gene products for bacterial replication, developing screening assays for them, and conducting large-scale screening campaigns on tens of different targets [71], [72]. As a result of these efforts, only a small number of lead compounds have reached preclinical studies, and the majority of them have never entered clinical trials. Many of the screening assays developed within this process have been published, and they can thus be adopted for the screening of chemical collections harboring compounds with more antibiotic-like properties. For example, bacterial ribosome biosynthesis can be targeted by screening for inhibitors of a bacterial GTPase in an assay using the isolated bacterial enzyme together with generally applicable screening methodologies for GTP level detection [102]. As the target proteins are of bacterial origin, their expression and production in the quantities required for HTS or MTS is generally not the rate-limiting factor for screening, and generic protocols for detecting the enzymatic activity in question are often readily available.

As mentioned, success in developing new drug candidates through target-based approaches has been limited. A major limitation of the target-based approach is that it gives no information on a compoundʼs ability to penetrate bacterial membranes and thus reach the target site. Particularly with gram-negative bacteria, penetration through the bacterial membranes places significant demands on a compoundʼs physicochemical characteristics and is thus a primary determinant of biological activity [70]. One means of taking this into account in target-based screening is to design whole-cell assays for the targets of interest by comparative screening of wild-type and target-depleted strains. When the target of interest is silenced by antisense RNA or other means, the silenced and wild-type strains of the bacterium are expected to show different susceptibilities towards a small-molecule modulator of the target. This approach has been used successfully, for example, in the discovery of new and previously known fatty acid biosynthesis (Fab) inhibitors among natural product extracts targeting S. aureus FabF/FabH [103], [104]. However, work on Fab, a widely studied antibacterial target, has revealed an additional challenge of target-based approaches: the essentiality of a target cannot be directly inferred from its conserved nature, since gram-negative bacteria were later shown to be resistant to Fab inhibitors [105].
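
Scoring such comparative whole-cell screens can be as simple as requiring a minimum shift in growth inhibition between the sensitized and wild-type strains. The sketch below is our own illustration, with the 20-percentage-point threshold chosen arbitrarily:

def differential_hit(inh_depleted, inh_wildtype, min_shift=20.0):
    # inh_depleted: % growth inhibition of the target-depleted (sensitized) strain
    # inh_wildtype: % growth inhibition of the wild-type strain
    # A compound acting through the depleted target should inhibit the
    # sensitized strain markedly more than the wild type.
    return (inh_depleted - inh_wildtype) >= min_shift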

Generally speaking, screening with biochemical assays is prone to false positives due to the aggregating behavior of some small molecules, as discussed earlier. In fact, beta-lactamases are among the most widely studied enzymatic targets in this respect, and according to data from Shoichetʼs laboratory, hit lists from screening campaigns with this target are dominated by promiscuous inhibitors, with an occurrence reaching 97 % of all screening hits [106]. Similarly, Newton et al. [107] reported a screening assay for the synthesis of mycothiol, an essential Mycobacterium tuberculosis metabolite, in which 65 of the screening hits from a collection of 2024 compounds were found to be promiscuous, nonspecific inhibitors or to interfere with the photometric readout.
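
A common practical safeguard here, following the detergent-sensitivity logic popularized by the Shoichet laboratory (the cutoff below is our own hypothetical choice, not a published value), is to rerun the biochemical assay in the presence of a nonionic detergent and flag hits whose inhibition collapses:

def probable_aggregator(inh_no_detergent, inh_with_detergent, min_drop=30.0):
    # Rerun the enzyme assay with a nonionic detergent (e.g., 0.01 %
    # Triton X-100); aggregate-based inhibition typically collapses there,
    # while specific inhibition is largely unaffected.
    return (inh_no_detergent - inh_with_detergent) >= min_drop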


Assaying bacterial virulence factors

An alternative antibacterial strategy aims at identifying small molecules that inhibit the activities of virulence factors. One approach in this respect is based on phenotypic assays for virulence factor gene regulation [108], [109], while others have addressed the question via specific virulence mechanisms [110], [111]. One virulence factor widely conserved especially among gram-negative bacteria is the type 3 secretion system (T3SS), a syringe-like protein complex responsible for the export of bacterial effector proteins. Screening assays for the discovery of T3SS inhibitors have been described for several bacteria, generally employing fluorescent or luminescent T3SS substrates detectable in extracellular space samples [110], [111]. Another virulence factor targeted by recent screening campaigns is bacterial motility, for which a screening assay utilizing a miniaturized version of the classical soft agar method, in combination with viability staining to distinguish growth inhibitors from motility inhibitors, has been described [112]. In addition, bacteria-specific exoenzymes have inspired the development of target-based screening assays based on the detection of cleavage products [113].
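
Because any growth inhibitor will also reduce a secreted-reporter signal, virulence screens of this kind typically normalize secretion to bacterial growth. A minimal sketch of such a dual-readout triage (our own illustration; all names and cutoffs are hypothetical):

def secretion_inhibitor(reporter, reporter_ctrl, od, od_ctrl,
                        min_secretion_inh=50.0, max_growth_inh=20.0):
    # reporter / reporter_ctrl: secreted-reporter signals of a treated well
    #                           and of untreated controls
    # od / od_ctrl:             matching bacterial growth readouts
    secretion_inh = 100.0 * (1.0 - reporter / reporter_ctrl)
    growth_inh = 100.0 * (1.0 - od / od_ctrl)
    # Require strong loss of secretion with essentially unimpaired growth
    return secretion_inh >= min_secretion_inh and growth_inh <= max_growth_inh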


Screening for anti-biofilm compounds

According to an estimate by the U. S. National Institutes of Health, over 65 % of bacterial infections are nowadays recognized to be caused by biofilms [114]. The main challenge posed by biofilms is their increased tolerance towards chemotherapy and host immune responses, which stems from a variety of factors including interbacterial communication networks, production of the extracellular matrix, and the presence of persisters [114], [115]. Nonbactericidal or bacteriostatic approaches for screening anti-biofilm compounds have involved, for instance, targeting signaling pathways [116], [117], [118]. Interest in biofilm research has increased greatly during the past few years, and efforts towards producing standardized data have expanded to involve both assay development and database integration [28], [119], [120], [121], [122].

Plants have classically not been considered sources of potent antibacterial compounds [70]. Yet, although no efficient inhibitors of bacterial growth have been discovered from plants, several plant-derived compounds are known to target bacterial populations by other means. Extracts from garlic and elmleaf blackberry (Rubus ulmifolius), as well as flavonoids isolated from different plants, have shown anti-biofilm activity [123], [124], [125], [126], [127], and in the case of garlic extract, the anti-biofilm activity has also been confirmed in a mouse model [128]. In another study, the anti-biofilm activity of a natural compound originating from garlic was traced to putative effects on interbacterial communication and occurred at concentrations that do not affect planktonic bacterial growth [129].

A typical screening-compatible procedure for measuring biofilm-modulating effects is to detect the biomass of a biofilm grown on the bottom and walls of microtiter plate wells by crystal violet staining. Additionally, the metabolic activity of bacterial biofilms can be determined with viability dyes such as resazurin, and the extracellular matrix can be quantified with a matrix-specific dye. Validated screening platforms based on such methods have been described and used successfully for the identification of small organic molecules preventing biofilm formation and/or destroying preexisting biofilms (e.g., [130], [131]). Attempts to miniaturize the crystal violet assay to the 96-well format have been successful; for instance, a semiautomated protocol for crystal violet staining was discussed earlier here as an example ([Fig. 4]) [28]. To increase throughput, alternative methods have been developed, such as an attachment assay described for a luciferase-expressing Pseudomonas aeruginosa strain [131]. In addition, an HCS assay for anti-biofilm studies has been described, based on GFP-expressing bacteria or a combination of fluorescent viability dyes [132], [133]. Compared to crystal violet staining, the HCS assay was reported to be significantly more sensitive in detecting surface attachment and other early events in the biofilm life cycle, and to allow simultaneous quantification of non-biofilm forms of the bacteria [132]. Achieving data on biofilm architecture in a high-throughput format may yield screening hits with characteristics different from those identified with conventional methods but, to the best of our knowledge, reports on medium- or large-scale anti-biofilm screens using HCS platforms have not been published thus far.
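
For the crystal violet endpoint, percent inhibition of biofilm formation is typically computed against untreated biofilm controls and stained sterile blanks. A minimal NumPy sketch (our own illustration with hypothetical variable names):

import numpy as np

def biofilm_inhibition(a595_treated, a595_biofilm_ctrl, a595_blank):
    # a595_treated:      crystal violet absorbances of compound-treated wells
    # a595_biofilm_ctrl: untreated biofilm controls (maximal staining)
    # a595_blank:        stained sterile wells (background dye binding)
    treated = np.asarray(a595_treated, dtype=float)
    ctrl = float(np.mean(a595_biofilm_ctrl))
    blank = float(np.mean(a595_blank))
    return 100.0 * (1.0 - (treated - blank) / (ctrl - blank))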

When it comes to selecting the proper measurement endpoint for anti-biofilm screens, several authors have noted the importance of combining biomass and viability measurements with matrix measurement methods (e.g., [119], [134]). Various lines of evidence have shown that compounds regarded as effective because they inhibit biomass and/or biofilm viability can, in some cases, promote overproduction (or maintenance) of the biofilm matrix, which facilitates biofilm colonization in the long term [135]. Thus, including matrix detection assays could significantly enhance our understanding of biofilm responses towards anti-biofilm agents and give a better assessment of their genuine clinical relevance. Also, in our opinion, an essential issue that needs to be acknowledged in antibacterial screening is that neither biofilms nor suspended bacteria exist as an isolated lifestyle. Bacteria switch dynamically between the two states upon changes in host or environmental conditions, and knowledge of the effects that test compounds have in one or the other state is therefore essential as well.
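
Combining the three endpoints into a single interpretable call could look like the following sketch (our own illustration; the activity threshold and category labels are hypothetical). Here a negative matrix-inhibition value denotes matrix overproduction relative to untreated controls:

def classify_antibiofilm_hit(inh_biomass, inh_viability, inh_matrix, active=40.0):
    # All inputs are % inhibition values from the three parallel endpoints.
    kills = inh_biomass >= active or inh_viability >= active
    if kills and inh_matrix < 0.0:
        return "active on biomass/viability but matrix-promoting"
    if kills and inh_matrix >= active:
        return "active on biomass, viability, and matrix"
    if kills:
        return "active on biomass/viability; matrix unaffected"
    return "inactive"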



Conclusions

The screening of natural products is undoubtedly complicated by their chemical complexity, and even purified natural compounds are structurally distinctive and challenging from a bioactivity screening perspective. However, proper assay validation and implementation can help overcome these difficulties and lead to the discovery of meaningful natural lead compounds.

Over the last few years, academia has become increasingly engaged in chemical screening and has come to provide mature and innovative contributions in a way that has reshaped a field traditionally dominated by pharmaceutical companies. With the opening of new academic centers worldwide, as well as the launch of large open initiatives (such as EU-OPENSCREEN), a vast array of infrastructures and compound collections has become, and will continue to become, available to an ever wider community. Thus, it is our hope that the strategies discussed here offer methodological guidelines for natural product researchers, and that they encourage others to embrace new efforts on the rewarding path of discovering new drugs from natural sources.


Acknowledgements

The authors thank the Academy of Finland for financial support (WoodyFilm project, decision 264 064; ArtFilm project, decision 272 266) as well as the Drug Discovery and Chemical Biology (DDCB) network of Biocenter Finland. The authors also thank Malena Skogman, Ph.D. and Daniela Karlsson, Ph.D. for kindly contributing some of the drawings and images utilized in [Figs. 1], [3], and [6].



Conflict of Interest

The authors declare no conflicts of interest.




Fig. 1 Workflow describing the general steps of the HTS process (red boxes) within the overall process of discovering new drug candidates. (Color figure available online only.)
Fig. 2 Conditions that typically need to be optimized during the HTS assay development process. (Color figure available online only.)
\[ Z' = 1 - \frac{0.03\,\big((S/B) \cdot CV_{\max} + CV_{\min}\big)}{(S/B) - 1} \]
(the screening window coefficient expressed in terms of the signal-to-background ratio, with CV_max and CV_min given as percentages; this is equivalent to the standard definition Z′ = 1 − 3(σ_max + σ_min)/|µ_max − µ_min|)
Fig. 3 General view of a typical protocol flow, exemplified with a manual vs. semiautomated comparison of an antimicrobial assay.
Fig. 4 Detailed view of the protocol flow (presented earlier in [Fig. 3]) of a manual (A) vs. semiautomated (B) antimicrobial assay, highlighting the concepts of the steps and processes. The processes are only indicated in the semiautomated assay, within thicker boxes, while steps are shown within thinner boxes. (Color figure available online only.)
Fig. 5 Three of the first reported aggregation-induced promiscuous inhibitors of natural origin: A quercetin, B indirubin, and C rottlerin. (Color figure available online only.)
Fig. 6A Workflow of the in silico process used for mapping the chemical space of aggregators and non-aggregators. B 3D representation of the chemical space of aggregators (blue dots) and non-aggregators (red dots), using the principal component analysis (PCA)-based chemical space navigation tool ChemGPS-NP (unpublished results). As schematically represented in A, the analysis uses 35 2D descriptors of the physicochemical properties of the compounds, calculated from SMILES. Salts, hydration information, and counter-ions are excluded from the SMILES. For analysis of the chemical space, the first four dimensions (PC1–PC4) are plotted using the software Grapher 2.1 (MacOS X, US).
Fig. 7 Impedance changes recorded by ECIS in GT1–7 cells treated with a crude plant extract (coded NP1) at different concentrations (50, 100, and 200 µg/ml). An ECIS model Z instrument (Applied Biophysics) was used. 8W1E electrodes were pretreated with 10 mM cysteine, as recommended by the manufacturer. Electrodes were filled with 400 µl of DMEM, and impedance was recorded at 16 kHz every 60 seconds for 1 hour. Cells (400 µl, 4 × 10^5 cells/ml) were then added, and impedance measurement was continued. Twenty-four hours after adding the cells, 40 µl of the medium was replaced by 40 µl of NP1, or by medium in the untreated control wells.